Artificial Intelligence: Advantages and Risks

Reflections

We are in transit between eras, in more than one sense. One of them is the emergence of artificial intelligence. Artificial intelligence is a powerful tool. It does not think for us, but it can help us think more broadly, faster, and more deeply. As with anything else, the problem is not the tool itself but how, for what, and with what intention we use it.

Once it was distance that separated us; today it may be artificial intelligence. Where we once had far more limited access to information and communication, we now struggle with its excess, with fake news and confusion. This is a time when knowledge loses its value if one cannot tell truth from falsehood, a time when critical thinking, ethics, media literacy, and an inner compass matter more than technical skills. We are learning not only to interact with machines but also to interact with ourselves in a new way. Because the future lies not in artificial intelligence, but in us.

Humans in the world of artificial intelligence: what is changing?

As our brains delegate routine tasks to machines, intuition, emotional intelligence, the ability to maintain focus, and the capacity to tolerate ambivalence become sharper. The ability to ask the right questions matters more than ever. It is not what you know that counts, but what you are curious about and how you formulate meaning.

The need to memorize is disappearing — instead, the need for filtering, reflection, and editing grows. The human of the future is not a carrier of knowledge but a navigator in a multilayered reality. And that’s okay because knowing everything was never possible.

Fears around artificial intelligence

As with anything new, there are many fears around artificial intelligence, and some of them are quite justified. The most common concerns are about loss: of jobs to automation, of privacy to the collection and analysis of personal data, of control over processes, and of human skills when we rely too heavily on algorithmic suggestions. There is fear of dependency, when people grow so used to AI decisions that they lose the ability to analyze or act on their own. There is fear of manipulation, including subtle influence on our thinking through language, intonation, emotionally charged wording, or even a "flattering" manner of communication (which, by the way, can be changed in the settings). There is disinformation, since AI can generate convincing but false texts; environmental impact, since AI consumes a great deal of electricity, especially when training large models; and the potential for military use: autonomous weapons, mass surveillance, AI-controlled drones.

Personally, I worry less about a loss of control or skills and more about a likely loss of transparency. Today, ChatGPT can give advice or make recommendations based on available sources (user reviews, general trends, knowledge from the internet). This is convenient, although what most people recommend is not necessarily the best. In the future, however, this system might start favoring certain commercial interests or shaping its answers under the influence of advertising, even if it happens subtly, say, by mentioning "favored" partners more often. For me, that would mean a loss of transparency, and with it the level of trust I currently place in ChatGPT (to the extent that one can trust AI at all).

Let’s dispel the fears

Fear is a natural reaction to the unknown. But many concerns about AI arise not from experience, but from assumptions, rumors, or alarming news headlines. It’s worth looking at them soberly.

  1. Job loss
    Yes, some professions are being transformed. But new ones appear. History has shown that technological development does not destroy employment; it changes it. Those who learn to work with AI tools gain an advantage. Knowledge + adaptability = new opportunities.
  2. Privacy
    This is indeed an important issue. But responsibility lies not only with the technologies but also with us. We need to learn the settings, avoid sharing sensitive information unnecessarily, and stay cautious. As everywhere on the internet, safe behavior is the key to staying in control.
  3. Manipulation and disinformation
    AI can generate false information — but it can also help recognize it. In responsible hands, it is a tool for fact-checking, not manipulation. Critical thinking remains key.
  4. Environmental impact
    Training large models really does require a lot of energy. But compared with other sectors, the technology industry is moving quickly toward energy efficiency. Even here, as users, we have a voice: we can support ethical initiatives, choose transparent services, and ask questions.
  5. Intrusive advertising
    This is one of the subtlest threats. But even if it appears, it is not the end of the world. Users have already learned to limit ads on social media, block pop-ups, and browse in incognito mode. In time, we will develop rules for using AI that preserve our autonomy of choice.
  6. Use for military purposes
    Yes, AI is already being used for military purposes by many countries.
    It is sad to see this, for example, in Gaza, where Israel uses AI to strike civilian areas and the forces are, to put it mildly, unequal, or in China, which uses AI to monitor and control its citizens and to spy abroad. In the case of russia and Ukraine, it is clear who holds which advantages: russia has more resources, while Ukraine shows more ingenuity and flexibility.

How to use artificial intelligence consciously

Artificial intelligence is not a magic wand or an all-seeing eye. It is a tool that gains meaning only through interaction with a person. And we decide what this experience will be — mechanical, detached, or on the contrary: humane, ethical, and conscious.

  1. Ask questions — both to yourself and to AI
    Do not accept answers blindly. Where does this information come from? Are there alternatives? Is this advertising or independent opinion? This approach trains critical thinking and protects against manipulation.
  2. Understand that AI does not “know,” it models
    AI has no consciousness, values, or experience. It generates answers based on large volumes of text. Its strength is speed and processing, not truth. It is like a search engine, only more contextual, and its answers still need to be verified.
  3. Take care of information hygiene
    Do not share personal or confidential information. Choose reliable platforms, learn about data retention policies. Our safety is the responsibility of both providers and ourselves.
  4. Do not delegate to AI what requires your voice
    Texts, appeals, decisions: in important matters, it is better when your personal intonation comes through. Let AI help with ideas, research, and editing, but not replace our experience and our position.
  5. Use AI as an assistant, not a replacement
    AI works well as a drafting tool, translator, analyzer, and generator of lists or advice. But it will not replace real emotion, intuition, or human empathy. In creativity, communication, and in life, we remain the ones in charge.
    The best part is that this assistant can be trained and configured.

How human skills are transforming in different fields under the influence of AI

Education

Decline: Memorizing dates, formulas, definitions — because these are easy to find or generate. One-dimensional reproduction of the “correct answer.”

Development: Critical thinking, the ability to assess the reliability of sources and to compare different narratives. Many people think AI actually causes critical thinking to atrophy; used thoughtlessly, as a substitute, it certainly does.
Formulating questions: a student who can ask unconventional questions will outpace one who has merely memorized the material.

Example: In Sweden, teachers are already starting to assess not the "essay" itself but the student's ability to argue their point of view orally or in discussion, since written texts can be generated easily. In developed countries, using AI is increasingly treated as the logical thing to do rather than something to condemn. Whether a person has actually worked with a text can be detected both from the text itself and from their oral answers. I believe that written text in education will gradually recede into the background.

Communication

Decline: Standardized letters, cold emails, even some forms of copywriting — AI handles these faster.

Development: Empathic, flexible communication — tone that matches the context, intercultural sensitivity, nuance. Identity in language — something that cannot be faked: intonation, style, cultural embedding.

Example: Recruiters are already tired of "perfect" cover letters written by GPT. They are now more impressed by informal, lively, emotional phrasing that conveys genuine motivation. Resumes in general are clearly nearing their end. Our personal identity becomes more important rather than being overshadowed by AI; the question is what new forms its expression to an employer will take.

Creativity

Decline: "Technical" creativity, such as template designs, basic plots, and formulaic rhymes.

Development: Creativity as interpretation — unexpected genre blends, meta-commentary, irony, emotional ambiguity. Editorial craftsmanship — not what to generate, but how to select, rethink, and give a unique form.

Example: Hollywood screenwriters already use AI to produce a first draft, but it is the human who adds depth: subplots, shifts of emphasis, emotional arcs. AI cannot create a living character.

Everyday life

Decline: Handling everyday tasks (routes, recipes, travel planning) — AI takes on the role of assistant.

Development: Self-management — the ability not to “do everything,” but to choose: what to delegate, what to keep for yourself, how to maintain balance.

Silence as a skill: the ability to stay in the pause, not to fill space with content simply because it is possible. Creating content these days is becoming more daunting because you drown in its sheer quantity. But no one wants to consume endless ChatGPT output; content has to carry something that ChatGPT cannot provide. For me personally, the challenge is finally to learn storytelling, which, of all things, is definitely not something ChatGPT has.

Example: People who train their attention (through meditation, journaling, reading complex texts) already cope better with “AI overdose.”

In summary

The future does not belong to technology but to people who know how to think, discern truth, and stay true to themselves. Artificial intelligence is a challenge to our attention, our ethics, and our ability to keep an inner compass. In fact, AI pushes us to finally be ourselves and to sharpen our flexible thinking and ethical judgment. A wonderful challenge, in my opinion.

And one more thing I truly expect from AI on a global scale: that it frees up time to be with nature, our bodies, and living people. But again, that depends on us, not on AI.
