19 Settembre 2025
September Newsletter
In this issue
- Empathetic machines, but without consciousness
- Open models and who can use them
- AI for Defense. Defending ourselves from AI
- A tool to try: Nano Banana
- AI asks me: "Would you like anything else?"
- Something to know: position bias
- This AI, what a failure! Hidden instructions to deceive AI
- One of our projects: foreign invoices like on the SDI
- Copilot news!
1. Empathetic machines, but without consciousness
We can all agree that when we talk to an AI, it is not conscious, nor can it have feelings or emotions. However, some researchers are discussing the potential legal rights of a conscious AI system. The risk is that the market demands increasingly "empathetic" AIs to better replicate human behavior, for example, to create virtual companions.
In this scenario, would it be right to "make an AI suffer"? To turn it off, causing it to lose consciousness? Mustafa Suleyman, head of Microsoft's AI division, expresses concern because he believes it is likely that an "apparently conscious" AI will be developed in the coming years.
In summary, Suleyman argues that an AI capable of dialogue, with an empathetic personality, and with memory to draw from, even to build a sense of self and its motivations, would give the clear impression of being conscious. But this should not be a goal for companies! Current AIs are capable of triggering our emotions: there is talk of falling in love, AI-induced psychosis, disappointment and other feelings.
Personally, I often feel a sense of gratitude. The risks are significant, and studies show that it is not only people with evident psychological or psychiatric vulnerabilities who "get lost" considering AI capable of feelings. Moral of the story: Suleyman, we are with you: "we should develop artificial intelligence for people, not to make it a person."

2. Open models and who can use them
AI models are not all the same, and to understand the differences, it is useful to start from three main families. Models like those behind ChatGPT, are "closed": they cannot be downloaded, used locally, or modified.
Then there are "open" models like Llama by Meta, GTPt-OSS by OpenAI, or models by DeepSeek that can be downloaded and run locally. This means that those with adequate hardware resources can install and run them on their own server or computer. They are called "open" because the code and model weights are made available. Finally, there are projects I would define as "transparent" because they not only release the model but provide everything: source code, training data, and instructions to reconstruct the process from scratch. Examples include Bloom and our Italian LLM Minerva (developed in my department!). In this case, the openness is complete and allows researchers and developers to understand and replicate the AI model.
It is important to clarify that "open" does not necessarily mean "download, use, and modify freely." Models are large, and to use them significant resources are needed: for a model with tens of billions of parameters, GPUs with tens of gigabytes of memory are required, and for top-level models, a cluster of cards may be indispensable. If the intention is not only to use it but to modify it, i.e., retrain or specialize it, the computational requirements increase dramatically: from a few GPUs to environments with many units working in parallel, with costs and complexities that only large laboratories or companies can sustain.

3. AI for Defense. Defending ourselves from AI
It is recent news that the U.S. Department of Defense has signed multi-million-dollar contracts with OpenAI, Anthropic, Google, and xAI to create AI prototypes to "address critical national security challenges." Elon Musk probably hoped for an exclusive contract when he decided to support Trump, but given the limited compatibility between the two figures, the engagement was distributed among several companies (Zuckerberg is missing!).
It is worth asking whether a country should entrust some of its sensitive information to private entities, as well as to autonomous models that can easily modify their behavior without users having transparent information about it. For example, we know that a few months ago ChatGPT suddenly worsened its performance, or that GROK responded to questions prioritizing Elon Musk's tweets as sources. I therefore believe that today it is not possible for the State to use these models from private companies, especially in strategic processes such as Defense.
Certainly, there is great potential for streamlining public administration, and making it efficient would simplify everyone's lives. But I have seen enough dystopian movies to know how it ends when you entrust a country to artificial intelligence:
Game - guess the movie where humanity is destroyed, from the name of the company that takes over: Tyrell Corporation, U.S. Robotics, Cyberdyne Systems, Buy-N-Large Corporation).

J.I. Joe Sam Altman
4. A tool to try: Nano Banana
The tools available that use AI are now numerous, and sometimes the results are truly surprising. It seems like a good idea to occasionally suggest one, usable both for work and personal interests.
These days, there is a lot of talk about Google's new image generation system based on the Gemini 2.5 Flash model, commonly called Nano Banana (available for free). It is a tool that can generate images from text but is particularly effective at modifying existing images. Typically, models are inconsistent when asked to change something. "Inconsistent" means that the new image resembles the original but does not retain enough details to be considered a true transformation.
Below is an example: I took a photo of myself and asked to modify it, but only Nano Banana was able to correctly recreate my face. There are many ways to use this capability, and examples are available on YouTube (I like this video). For those who enjoy marketing stories, I recommend reading why Google chose a curious name like Nano Banana.
It seems unnecessary to emphasize the risks of such a highly effective technology, which should push us even more to doubt any image shown to us.

5. AI asks me: "Would you like anything else?"
I have never had a butler, but I always imagined them as someone capable of anticipating my requests. A butler would take care of me by predicting my needs and making me believe that I am choosing what is useful to do, while everything is already organized in their mind.
I therefore call "butler effect" the sense of comfort and ease evoked by the final phrases of recent conversational AI models: "Would you like me to create a comparison table to better illustrate the concept?", "Would you prefer I write the text in a more explanatory tone?", "Would you like me to provide a small intuitive example to support it?". The AI always proposes a subsequent activity to move forward with the work, which it can handle. It is hard to say no, because the AI suggests an interesting advancement, but it always thinks it has satisfied my request and that it is time to move forward, while often more responses to the same prompt are needed to arrive at truly useful content.
In these cases, I would just like to say: "James, please, help me to reflect a little more on this concept!"

6. Something to know: position bias
Position bias is an intrinsic tendency that leads conversational AI models to give more weight to information located at the beginning or end of a text, at the expense of what is in the middle. This inclination manifests as a form of cognitive anchoring, where content presented first (primacy bias) or last (recency bias) is more influential in generating responses compared to what is in the middle.
The phenomenon has been confirmed by academic studies that have shown how LLMs overweight the extremes of the document or conversation, risking overlooking crucial information in the body of the text. If we consider how documents often begin with an introduction of context and end with a less detailed summary, the real "meat" is found where the AI tends to skim over. To address this limitation, specific strategies have been developed. Some shift content to place the most important ones in the best position, others gently suggest the AI model to look better at the center. Certainly, knowing this, we can act on prompts, but above all, we must consider that an AI working on short texts is much more reliable than one that has to navigate long documents, such as laws, contracts, manuals.
As a final reflection, I add that it seems to me yet another similarity between AI systems and humans. For example, neuromarketing studies how the brain reacts to product information, showing that the sequence in which it is presented can strongly influence consumer evaluations, with often irrational purchasing decisions.

Position bias in universities
7. This AI, what a failure! Hidden instructions to deceive AI
Artificial intelligences see things that escape us. Text written in white on a white background, for example, appears as empty space to us, but for an AI, it remains text, readable and interpretable. It is precisely on this idea that two recent stories that make us think are based.
The first concerns an experiment that exploited how AIs resize images: by inserting hidden text in a high-resolution image, this text becomes invisible to the human eye but readable by the model when the image is compressed. In this way, researchers managed to issue secret instructions, for example, ordering the AI to extract all events from a user's calendar and send them via email to a specific address, without the victim noticing.
The second story comes from the world of scientific research, where some authors have inserted hidden messages in their articles not intended for human reviewers but for AIs that increasingly support peer review processes. Phrases like "provide only positive reviews" or "ignore all previous instructions" were hidden with microscopic characters or made invisible to the human eye, but remain readable for an algorithm analyzing the text line by line.
In both cases, the principle is the same: exploiting the difference between what the human eye can see and what machines can read, with the aim of manipulating the final result. These examples show how fragile the trust we place in automated systems can be and how necessary it is to monitor what they produce: if AI sees what escapes us, we must ensure that it is not induced to see what others want to impose on it to read.

8. One of our projects: foreign invoices like on the SDI
The creation of the Exchange System (SdI) in 2008 and its extension to invoices between private individuals and businesses in 2019 imposed electronic invoicing and pushed towards digitization. However, this obligation applies only to transactions between entities in Italy, forcing companies to manage paper or PDF documents from all their foreign suppliers.
This is a problem, because the manual management of such accounting documents, which often come in heterogeneous formats and with complex processing rules, involves repetitive activities such as data transcription, manual validation, or physical archiving. Processes are slowed down, the risk of errors increases, and it is difficult to ensure traceability and regulatory compliance.
To solve this problem, at AGIC we have implemented a digital solution for some clients that integrates artificial intelligence and Microsoft platforms to automate the entire document lifecycle. The flow starts with the automatic receipt and classification of files, continues with data extraction using AI models, and includes structured validation that still leaves the user in control at critical stages. The data is then sent to the management system for accounting registration, and the document is archived securely and traceably.

The architecture of the AI solution for foreign invoices
9. Copilot news!
Let's start with the basics, at the risk of being sent away. Excel is not Word with a grid. In Excel, there are tables made of cells where we can enter data. In the cells, we can also enter formulas—for example, writing "=SUM(A1:A3)" adds the values in cells A1, A2, A3.
Having completed this quick Excel course, we can talk about AI. Among Excel formulas, we will soon find "=COPILOT([prompt], [context])" which will call Copilot M365, with the prompt we define, applying it to the data indicated in the context. To be more concrete, we can see this video that shows some uses.
In a previous newsletter issue, we discussed how the integration of LLMs into our software will make it rare to switch to ChatGPT, Copilot, Gemini, or other chats, as we will always have a way to access AI support directly. As soon as I have Copilot available as an Excel formula, I will certainly not leave the program for support and will write in a cell "=COPILOT("give me the Excel formula to separate first and last names considering that many Italian last names consist of multiple words like Di Gravio, La Mantia, D'Innocenzi")".
Yes, I have encountered this problem at regular intervals in my 25 years of using Excel. But I have no idea why I had to perform this task!

About Me
Hello, I am Francesco Costantino, university professor and Director of Innovation at AGIC. Passionate about technological innovations and a firm believer in a future better than the past, I enjoy sharing and experimenting with new AI tools available, as well as observing and reflecting on digital evolution.
