Navigating the AI world

As we enter the era of artificial intelligence, here are some thoughts on navigating through it.
Categories: blog, AI, large language models, chatgpt

Author: Ranjeet Utikar
Published: July 12, 2023

The release of ChatGPT1 and the rise of generative artificial intelligence (GenAI), especially large language models (LLMs), have profoundly impacted the world around us in a very short period. Going forward, GenAI will be embedded in our everyday lives in ways we have not anticipated. Yüksel et al.2 have recently published an overview of AI applications from an engineering design perspective.

For engineering graduates, this means that workflows and workplaces will be much different from what they were trained for. As highly autonomous systems are developed and deployed, we will see increased productivity and efficiency. As a result, jobs that perform mundane tasks will be eliminated or redefined. Newer roles that involve using AI systems and working alongside them will emerge. These roles will require skills in AI programming and data analysis, as well as higher-order critical thinking and problem-solving. So how do we prepare ourselves for the imminent age of AI? Here are some thoughts.

Solid foundation in fundamental concepts, best practices

Over the coming years a plethora of AI-based tools will emerge for performing most engineering tasks. However, to utilize these to the fullest, it is essential to have a good understanding of the fundamental building blocks of engineering analysis and their underlying principles.

Understanding of relevant best practices, design codes, and implementation standards will be crucial in assessing the outputs of AI systems. Mastery of the underlying principles will also enable engineers to make informed decisions, troubleshoot issues, and develop robust solutions (AI or not) to unique problems.

Engineering practice is not just design and analysis. There are several stages in a typical engineering project such as:

  • Project initiation
  • Requirements gathering
  • Conceptual design
  • Detailed design
  • Procurement and resource allocation
  • Construction, manufacturing, or development
  • Testing and quality assurance
  • Installation, integration, or deployment
  • Monitoring and maintenance
  • Project closure

Each stage in the project necessitates distinct skill sets, and AI can enhance productivity in certain stages. Nevertheless, engineers are fundamentally skilled problem solvers. They possess the ability to analyze problems and offer elegant solutions, which remains a timeless skill. This process involves creativity, and determining the right approach to a problem often holds greater significance than the intricate details of the solution.

In any project, it is the human engineer who is responsible for determining the structure, solution, implementation approach, and other crucial aspects. I don’t anticipate this role being replaced by AI anytime in the near future.

Continuous learning

In the rapidly evolving technology landscape, continuous learning will assume an increasingly crucial role in engineering practice: it is essential for engineers to stay updated with the latest developments.

Continuous learning encourages engineers to be curious, think critically, and be creative. From utilizing advanced simulation techniques to implementing cutting-edge materials, it empowers engineers to leverage the latest advancements. It allows them to stay up to date on legal requirements, safety standards, and ethical guidelines. Most engineering projects are multidisciplinary; broadening horizons through continuous learning allows engineers to contribute meaningfully to interdisciplinary projects and leverage cross-disciplinary expertise.

Moving forward, there will be a proliferation of customizable, AI-driven tools catering to specific needs. Even today, many engineering software packages offer scripting facilities (such as VBA in Excel). Developing skills in effectively utilizing these facilities will be crucial for engineers to maximize their potential and productivity.
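As a small illustration of the kind of scripting that pays off, here is a minimal Python sketch, using only the standard library and made-up inspection data, that automates a repetitive check an engineer might otherwise do by hand in a spreadsheet:

```python
import statistics

# Hypothetical wall-thickness measurements (mm) from a routine inspection.
measurements = [12.1, 11.9, 12.3, 12.0, 11.8, 12.2, 11.7, 12.4]

# Acceptance criterion: nominal 12 mm, tolerance +/- 0.5 mm.
nominal, tolerance = 12.0, 0.5

mean = statistics.mean(measurements)
stdev = statistics.stdev(measurements)
out_of_spec = [m for m in measurements if abs(m - nominal) > tolerance]

print(f"mean = {mean:.2f} mm, stdev = {stdev:.2f} mm")
print(f"out-of-spec readings: {out_of_spec}")
```

Once a check like this is scripted, it can be rerun on every new batch of data at no extra cost, which is exactly the habit that transfers to AI-driven tools.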

If engineers treat AI as a magical black box, they will not be able to use it effectively. Therefore, it is important to have a basic understanding of how AI works. Unfortunately, most engineering programs do not teach this. As a result, engineers must take the initiative to learn about AI on their own.

Here is a list of resources that I found useful in this regard.

  • Neural Networks: Zero to Hero - A course by Andrej Karpathy3.
  • Artificial Intelligence: A Modern Approach - A book by Stuart Russell and Peter Norvig4.
  • Andrew Ng’s Machine Learning Collection - Courses from Coursera5.
  • Python For Beginners - Resources from python.org6. Python is the de facto language of AI.
  • Structure and Interpretation of Computer Programs, JavaScript Edition - A classic by Harold Abelson et al.7 Not strictly critical, but it provides an appreciation of how computer programs work in general.

Interacting with AI

It is important to choose the right AI-based tool for your needs. Each tool has its own strengths and weaknesses, and the best way to use it will depend on the specific workflow.

The key is to experiment with different AI tools, such as ChatGPT, to get a feel for how they work. Consider the quality of their outputs, but keep an open mind for other tools. The field of AI is constantly evolving, so you don’t want to settle on one tool and use it forever. You’ll need to be adaptable and willing to try new things.

For most AI tools, the user interface is based on the chat paradigm. This means that you interact with the tool by typing in chat prompts. Understanding how to create effective chat input (called prompt engineering) will help you get the most out of AI tools.

Here are some common prompt engineering approaches you can use to improve the output from an AI tool.

  • Chain-of-thought prompting: Instead of asking the model to do a lot of work in one go, break down the problem into multiple steps.

  • Generated knowledge prompting: The model output can be greatly enhanced by providing the model with some initial knowledge about a topic, and then asking it to generate new knowledge based on that information. It also helps to provide some context to the model in order to drive it to a more usable answer.

  • Directional stimulus prompting: This involves providing the model with a specific goal or outcome, and then asking it to generate a response that will achieve that goal. In conversational AI models, providing feedback on the answer generated can help the model to improve the response.

  • Be as specific as possible in your prompts. The more specific you are, the more likely the model is to generate the output that you want.

  • Use keywords and phrases that are relevant to the task that you are trying to accomplish. This will help the model to focus on the relevant information.

  • Experiment! Be patient and iterative. It may take some trial and error to find the right prompt for a particular task.
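To make the first few points concrete, here is a short Python sketch contrasting a vague prompt with a specific, step-by-step (chain-of-thought style) one. The messages use the role/content chat format common to most chat interfaces; the engineering scenario and wording are hypothetical, and no API call is made here.

```python
# A vague prompt vs. a specific, step-by-step prompt for the same task.
vague_prompt = "Design a water tank."

specific_prompt = (
    "You are a civil engineer. Size a cylindrical water storage tank.\n"
    "Requirements: capacity 50 m^3, height limited to 4 m.\n"
    "Work step by step:\n"
    "1. Compute the required plan area from capacity and height.\n"
    "2. Compute the tank diameter from the plan area.\n"
    "3. State any assumptions (freeboard, wall material) explicitly."
)

# The common chat-message structure: a system message sets the behaviour,
# the user message carries the actual request.
messages = [
    {"role": "system",
     "content": "Answer as a practicing engineer; show your working."},
    {"role": "user", "content": specific_prompt},
]

for m in messages:
    print(m["role"], ":", m["content"].splitlines()[0])
```

The vague prompt leaves the model to guess the requirements; the specific one fixes the constraints, breaks the task into steps, and asks for assumptions to be stated, all of which make the output easier to verify.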

The OpenAI Cookbook8 has a lot of resources on prompting.

Critical evaluation and understanding the risks

Critical evaluation is part of any good engineering practice, and AI tools are no exception. LLMs are generative AI: they are not simply reproducing information they have been taught, but are creating new information based on their training. One common problem with these models is hallucination: LLMs can sometimes generate false or misleading information. That is why checking AI responses against factual data is critical.

Here are some pointers on critically evaluating the response of AI models:

  • Be aware of the limitations of AI models. No AI model is perfect, and these models have a tendency to generate false or misleading information. It is important to keep these limitations in mind when evaluating the output of an AI model.
  • LLMs are trained on massive datasets of text and code. The model output is only as good as the sources it is trained on. For specialized AI tools, it is important to verify the sources of the information; any inaccuracies in the training dataset will be manifested in the output produced.
  • Knowing the data that a model was trained on can help you evaluate its outputs. This includes potential sources of bias, such as how old the training data is, what data was filtered out, and what data was used for training.
  • LLMs can generate text that is grammatically and semantically correct. So while the answers may “sound” right, it is essential to evaluate the output with your own knowledge and judgment.
  • You can ask the LLM to provide sources for the information that it is generating.
  • The context in which the output was generated can be important for evaluating its accuracy. A response generated using a specific prompt is more likely to be accurate than if it was generated without any context.
  • Compare the output with other sources or calculations.
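The last point above, cross-checking against an independent calculation, can be as simple as a few lines of Python. In this hypothetical example, an AI tool has returned a mid-span deflection for a simply supported beam under a central point load; the claimed value and all inputs are made up, and we verify it against the textbook formula δ = PL³/(48EI):

```python
# Cross-check a (hypothetical) AI-generated answer against a textbook formula.
# Simply supported beam, central point load: delta = P * L**3 / (48 * E * I).

P = 10_000.0   # load, N
L = 4.0        # span, m
E = 200e9      # Young's modulus for steel, Pa
I = 8.0e-6     # second moment of area, m^4

delta = P * L**3 / (48 * E * I)   # mid-span deflection, m

ai_claimed = 8.3e-3   # made-up value an AI tool might have returned, m

relative_error = abs(delta - ai_claimed) / delta
print(f"computed: {delta * 1e3:.2f} mm, claimed: {ai_claimed * 1e3:.2f} mm")
print(f"relative error: {relative_error:.1%}")
```

A check like this takes a minute to write and immediately tells you whether the AI answer is in the right ballpark or a hallucination.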

Lebovitz et al.9 outline key questions to ask when evaluating AI tools.

An important aspect to consider when using AI tools is copyright and ownership of data. This is especially critical when using code and designs generated by AI tools. For code generated using AI it is also essential to check for any vulnerabilities and unexpected behaviors.

The ownership of data is a complex issue. Data has been called the new oil: every time we log in online, use an app, or navigate with GPS, we generate enormous amounts of it. Who owns all this data? Ideally, it should be us; after all, it is our personal information, our habits, our likes and dislikes. But it often ends up in the hands of companies, by virtue of the lengthy 'Terms and Conditions' we tend to skip over.

Typical engineering operations generate a tremendous amount of data. This data could be used to train AI models to improve products and productivity. However, one should also be careful about potential misuse of this data. Technologies like blockchain can help in tracking data and ensuring fair use.

Ethical considerations

The main ethical concerns around the use of AI revolve around bias, privacy, intellectual property, transparency, and safety. Shahab Mohaghegh has discussed this subject at length in a two-part series10,11.

Here are responsible AI and ethics resources from the leading companies and organizations.

  • European Commission12
  • Google13
  • Microsoft14
  • OpenAI15
  • IBM16

Final thoughts

While AI is set to shake up every industry, it also opens up huge opportunities.

Understanding complex data (like where it comes from, how it changes over time, and how it relates to other data) is critically important for training AI systems. This creates demand for professionals who can understand and manage this kind of data.

This kind of understanding isn’t just for data scientists anymore. With data driving pretty much everything in business, being able to understand and handle complex data should be a critical skill in every engineer’s arsenal.

Engineers who want to thrive in an AI world will need to embrace AI as a tool, incorporate it into their workflow, and be aware of its limitations. They will also need to rely on their engineering acumen and human instinct to make the most of AI.

Acknowledgements

Thanks to Ashwin Rao for reading the draft and providing valuable comments.

References

1. OpenAI. Introducing ChatGPT. https://openai.com/blog/chatgpt (2022).
2. Yüksel, N., Börklü, H. R., Sezer, H. K. & Canyurt, O. E. Review of artificial intelligence applications in engineering design perspective. Engineering Applications of Artificial Intelligence 118, 105697 (2023).
3. Karpathy, A. Neural networks: Zero to hero. https://karpathy.ai/zero-to-hero.html (2023).
4. Russell, S. J. & Norvig, P. Artificial intelligence: A modern approach. (Pearson, 2021).
5. Coursera. Andrew Ng's machine learning collection. https://www.coursera.org/collections/machine-learning (2023).
6. Python Software Foundation. Python for beginners. https://www.python.org/about/gettingstarted/ (2023).
7. Abelson, H., Sussman, G. J., Sussman, J., Henz, M. & Wrigstad, T. Structure and interpretation of computer programs. (The MIT Press, 2022).
8. OpenAI. OpenAI cookbook. (2023).
9. Lebovitz, S., Lifshitz-Assaf, H. & Levina, N. The no. 1 question to ask when evaluating AI tools. MIT Sloan Management Review 64 (2023).
10. Mohaghegh, S. AI-ethics in engineering. https://towardsdatascience.com/ai-ethics-in-engineering-65ab23af3f76 (2021).
11. Mohaghegh, S. AI-ethics in engineering. https://towardsdatascience.com/ai-ethics-in-engineering-437ec07046a6 (2021).
12. European Commission. Ethics guidelines for trustworthy AI. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (2019).
13. Google. Google responsible AI practices. https://ai.google/responsibility/responsible-ai-practices/ (2023).
14. Microsoft. Responsible AI principles from Microsoft. https://www.microsoft.com/en-us/ai/responsible-ai (2023).
15. OpenAI. Safety & responsibility. https://openai.com/safety (2023).
16. IBM. AI ethics. https://www.ibm.com/impact/ai-ethics (2023).

Citation

BibTeX citation:
@online{utikar2023,
  author = {Utikar, Ranjeet},
  title = {Navigating the {AI} World},
  date = {2023-07-12},
  url = {https://smilelab.dev//blog/posts/2023-07-12-navigating-the-ai-world},
  langid = {en}
}