Aside from easy tooling, another reason to start with ingestion is that it is an excellent place to identify challenges and easy wins. Organizations often have surprisingly little institutional understanding of the breadth and depth of their data, so undertaking the effort to list and ingest data sources into Azure is a good first move. Simply cataloging all of your data sources can reveal the level of data maturity in your organization, which in turn helps you understand what you need from the larger cloud data solution.
Keep in mind that iterative development is simply how things are done in the cloud era. You can start with a simple pipeline that moves data from source to sink as a Minimum Viable Product, then add capability and maturity as you gain understanding of your data, your ingestion needs, and the ADF tool itself.
Your initial pipelines need not be sophisticated; they can simply serve the purpose of getting data into the lake. Once you know what kinds of data you need to ingest, and how much of it, you can enhance your ingestion solution. One such enhancement is refactoring pipelines to leverage ADFv2 features like parameters and variables, which make your pipelines more flexible and decouple them from specific data sources.
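As a sketch of what such a refactoring looks like, a parameterized ADFv2 pipeline definition might resemble the abbreviated JSON below. The pipeline, dataset, and parameter names here (IngestToLake, SourceFolderDataset, LakeDataset, sourceFolder) are hypothetical and the definition is trimmed to the essentials, but the pattern of referencing @pipeline().parameters inside an expression is how ADFv2 lets one pipeline serve many sources.

```json
{
  "name": "IngestToLake",
  "properties": {
    "parameters": {
      "sourceFolder": { "type": "String" }
    },
    "activities": [
      {
        "name": "CopyToLake",
        "type": "Copy",
        "inputs": [
          {
            "referenceName": "SourceFolderDataset",
            "type": "DatasetReference",
            "parameters": {
              "folderPath": {
                "value": "@pipeline().parameters.sourceFolder",
                "type": "Expression"
              }
            }
          }
        ],
        "outputs": [
          { "referenceName": "LakeDataset", "type": "DatasetReference" }
        ],
        "typeProperties": {
          "source": { "type": "DelimitedTextSource" },
          "sink": { "type": "DelimitedTextSink" }
        }
      }
    ]
  }
}
```

Instead of maintaining one hard-coded pipeline per source, you can trigger this single pipeline repeatedly with different sourceFolder values.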
So where should you start on your Azure data solution? Start at the beginning!