The generative AI revolution in technology

Luis Fernando Ortega, CTO and co-founder of nuvu, gave a detailed presentation titled "Generative AI: Innovation and Efficiency in Building Technology Solutions" at the August 2024 edition of CTO Insights Bogotá. Below, we elaborate on the key points he addressed, showing how generative AI is revolutionizing the field of technology.

Introduction and key concepts

Ortega began by explaining fundamental concepts of language modeling, highlighting the importance of tokens and embeddings in text generation. Large Language Models (LLMs) are trained on vast amounts of data from the Internet, which allows them to handle autocompletion and text generation tasks with high accuracy.

Key concepts include:

  • Tokens: The minimal units of text that language models process.
  • Embeddings: Vector representations of words that capture their semantic meaning (see the sketch after this list).
  • Transformers: A neural network architecture that has significantly improved the processing of long text sequences.
  • Context Window: The amount of information a model can process simultaneously, which is crucial for complex tasks.
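
To make tokens and embeddings concrete, here is a minimal Python sketch. The toy vocabulary, whitespace tokenizer, and random embedding table are all invented for illustration; real LLMs learn their embeddings during training and use subword tokenizers rather than whitespace splitting.

```python
# Illustrative only: a toy vocabulary, whitespace "tokenizer", and a random
# embedding table. Real LLMs use learned subword tokenizers and embeddings.
import numpy as np

vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(len(vocab), 8))  # one 8-d vector per token

def tokenize(text: str) -> list[int]:
    """Map each whitespace-separated word to its integer token id."""
    return [vocab[word] for word in text.lower().split()]

tokens = tokenize("the cat sat on the mat")
embeddings = embedding_table[tokens]  # shape (6, 8): one vector per token

print(tokens)            # [0, 1, 2, 3, 0, 4]
print(embeddings.shape)  # (6, 8)
```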

Transformers and context window size

Transformers, a deep neural network architecture introduced in 2017, overcome the limitations of recurrent neural networks (RNNs) by enabling parallel processing with GPUs, making them scalable and efficient. This technology is the basis for advanced models such as GPT-4 and Claude.

Ortega explained that transformers allow handling large volumes of data simultaneously, resulting in significantly improved performance in natural language processing tasks. Furthermore, he highlighted how transformers have changed the paradigm of training and using language models, enabling efficient parallelism and unprecedented scalability.
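
As a rough illustration of that parallelism, below is a minimal NumPy sketch of scaled dot-product self-attention, the core operation inside transformers. It omits the learned query/key/value projections and multi-head structure of real models; the point is that every position attends to every other position in a single matrix operation, rather than step by step as in an RNN.

```python
# Minimal scaled dot-product self-attention. Real transformers add learned
# Q/K/V projections, multiple heads, and feed-forward layers on top.
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """x has shape (seq_len, d_model); the same vectors serve as queries,
    keys, and values here for simplicity."""
    d_model = x.shape[-1]
    scores = x @ x.T / np.sqrt(d_model)             # all-pairs similarities at once
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ x                              # each position mixes the whole sequence

x = np.random.default_rng(1).normal(size=(6, 8))
print(self_attention(x).shape)  # (6, 8) -- computed for all positions in parallel
```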

Scaling laws and emergent abilities

Scaling laws, according to Ortega, are principles that describe how the performance of language models improves as their size and the amount of training data increase. For a model with X parameters, optimal training requires approximately 20 × X tokens. These laws are crucial to understanding how to scale models effectively without compromising quality.
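
Applying that rule of thumb, the token budget grows linearly with the parameter count; the model sizes below are chosen purely for illustration:

```python
# ~20 training tokens per parameter (the heuristic cited in the talk).
def optimal_tokens(n_params: float) -> float:
    return 20 * n_params

for n in (7e9, 70e9):  # illustrative model sizes
    print(f"{n:.0e} parameters -> ~{optimal_tokens(n):.1e} tokens")
# 7e+09 parameters -> ~1.4e+11 tokens
# 7e+10 parameters -> ~1.4e+12 tokens
```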

Ortega also discussed the emergent abilities of LLMs: surprising capabilities that were not anticipated during initial training, such as advanced contextual understanding and the generation of coherent text in multiple languages.

Creation of effective prompts

An essential aspect of text generation is the creation of effective prompts. Ortega provided examples of how to improve prompts through precise language, sufficient context, and detailed review; a short sketch after the list shows one way to combine these elements.

  1. Provide sufficient context: A well-contextualized prompt, such as "Write a short story set in Victorian England that tells the story of a young detective solving his first major case," results in more relevant and accurate answers.
  2. Use precise language: Being specific, as in "Write a 500-word informative article on the dietary needs of adult Golden Retrievers," improves the quality of the responses.
  3. Try prompt variations: Different phrasings, such as "Compose a 1,000-word blog post detailing the physical and mental benefits of regular yoga practice," can lead to richer and more varied results.
  4. Review the results: Reviewing and correcting the outputs is essential to ensure the accuracy and relevance of the generated answers.
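
As a minimal sketch of how these elements can combine, the hypothetical build_prompt helper below (not an API from the talk) assembles context, a specific task, and explicit constraints, then produces variations to compare before a final human review:

```python
# Hypothetical helper: assemble a prompt from context, a specific task, and
# explicit constraints, then generate variations for side-by-side review.
def build_prompt(task: str, context: str = "", constraints: str = "") -> str:
    parts = [p for p in (context, task, constraints) if p]
    return "\n".join(parts)

base = build_prompt(
    context="Setting: Victorian England.",
    task=("Write a short story that tells the story of a young detective "
          "solving his first major case."),
    constraints="Length: about 500 words. Tone: atmospheric.",
)

# Variations of the same prompt, to compare the resulting outputs:
variations = [base, base.replace("atmospheric", "suspenseful")]
for v in variations:
    print(v, end="\n---\n")
```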

Hallucinations and biases

Language models can generate "hallucinations", that is, incorrect or unverified information. Ortega exemplified this with an analysis of the Justo y Bueno strategy, showing how results can be positive but still face significant challenges.

Hallucinations occur when models generate plausible but incorrect information, which can lead to serious errors in critical applications. To mitigate this problem, Ortega suggests implementing human verification and review mechanisms.
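
One possible shape for such a mechanism is sketched below: outputs whose confidence falls under a threshold are routed to a human reviewer instead of being published automatically. The confidence score is assumed to come from some external check (a verifier model, retrieval against trusted sources); this is an illustrative pattern, not the specific mechanism Ortega described.

```python
# Route low-confidence model outputs to human review instead of auto-publishing.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float  # 0.0-1.0, from a hypothetical verifier or retrieval check

def route(draft: Draft, threshold: float = 0.8) -> str:
    """Only high-confidence drafts skip the human gate."""
    return "publish" if draft.confidence >= threshold else "human_review"

print(route(Draft("Revenue grew 12% in Q3.", confidence=0.55)))               # human_review
print(route(Draft("Water boils at 100 °C at sea level.", confidence=0.97)))   # publish
```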

Bias is another critical challenge, as LLMs may reflect biases present in their training data, such as gender or racial biases. Mitigating them involves retrieving diverse documents, careful fine-tuning, and gathering continuous feedback.
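
As a toy illustration of measuring bias before mitigating it, the probe below counts gendered pronouns in a batch of invented completions for an occupation prompt. Real evaluations rely on curated benchmarks, but the measure-then-mitigate loop is the same idea.

```python
# Toy bias probe: count gendered pronouns in (invented) model completions.
from collections import Counter

completions = [
    "The engineer finished his design.",
    "The engineer presented her results.",
    "The engineer submitted his report ahead of schedule.",
]

pronouns = Counter(
    word for text in completions
    for word in text.lower().rstrip(".").split()
    if word in {"his", "her"}
)
print(pronouns)  # Counter({'his': 2, 'her': 1}) -> a skew worth investigating
```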

Evolution of Natural Language Processing (NLP)

Ortega concluded his presentation by highlighting the evolution of natural language processing (NLP), from the first RNNs to today's advanced transformers. The history of NLP shows significant progress in the ability of models to understand and generate human text, with applications ranging from chatbots to machine translation systems.

He also mentioned the importance of organizational technology maturity, including security, DevOps, data management, and services/APIs, as a prerequisite for integrating generative AI and analytics effectively into technology solutions.

Own experiences and recommendations

Finally, Ortega shared some of his own experiences, such as news classification and the development of healthcare applications. For example, building a news viewer and a test grader shows how generative AI can be applied across different sectors.

Ortega also recommended the book "Building LLMs for Production" by Louis-François Bouchard and Louie Peters as essential reading for those interested in bringing language models into production.


Luis Fernando Ortega's presentation highlights the relevance and transformative potential of generative AI in the creation of innovative and efficient technology solutions. Through practical examples and recommendations, Ortega demonstrates how this technology is changing the technological landscape and offers clear guidance for its effective implementation. Artificial intelligence not only redefines the boundaries of what is possible, but also opens up new opportunities for innovation and efficiency across multiple industries.
