Yesterday — 22 January 2020 · SogetiLabs

The aRt of reconciling science and statistics (1/5)

By Kamel Abid

We are living in the information era, surrounded by smartphones, social networks, platforms, IoT devices, and technologies of all kinds, with growing volumes of data shared with ever greater frequency and richness. This is a boon for scientists, who see the mass of raw material expanding, but also a challenge for statisticians: processing the data demands more and more adaptation in software, algorithms, and statistical methods. Working on the same case, scientists and statisticians can sometimes feel far apart. But before we delve deeper into how the R statistical programming language could help bring these two closer, let’s first lay the cornerstones of my story.

Present in 20 countries, we at Sogeti are an IT company whose main practices are Digital, Cyber, Cloud, and Testing. And like the small “Gallic village”, we have here in Luxembourg a special population: statisticians.

These statisticians are more concerned with collecting, cleaning, arranging, analyzing, and disseminating data than with integrating cloud solutions or developing apps. There are 70 statisticians at Sogeti Luxembourg, among them two great colleagues: Paul Majerus and Alexandre Poncin. Their main duties include selecting and preparing the best ingredients, and using (and sometimes creating) the most useful utensils, so as to reveal all the richness and relevance of the data.

In this task they are helped by an essential tool: R. Before adopting R, our colleagues used to spend months iterating the same tasks year after year: cleaning, sorting, arranging, and presenting. Meanwhile, scientists were trying to analyse and understand the patterns of the world, some of them involuntarily neglecting available statistical practices.

Some user-friendly statistics packages were part of the problem: by reducing statistical reflection to a mindless button click, they enabled poor statistics-based data analyses and thereby actively contributed to the reproducibility crisis that the scientific world has been facing for more than a decade.

So why is R an ideal framework for working with statistics? As a simple but robust programming language, it facilitates the automation of procedures and is supported by an active community. It is constantly enriched with new features and applications, heavily documented, and all of this is open source.

“Thanks to R, we are saving precious time by automating our procedures. This time is reinvested in deepening our analyses, proposing new dissemination tools – such as web applications built in R” says Paul about the possibilities of R.

“On the other hand, scientists can find in R a tool that is quite simple to use but that requires an understanding of every part of the statistical process. By methodically constructing each stage of the statistical analysis, scientists gain a better understanding of their construction, its limitations, and possible errors, and they maintain complete control and transparency over the calculations and methods applied. R is therefore less error-prone and supports scientists in good statistical practice,” explains Alexandre.

For the next four blog posts, Paul, Alexandre, and I will describe and explain how R changes the game by:

– Reducing data preparation from 6 months to one week

– Making it possible to easily inject images, videos, sounds, and other kinds of data

– Supporting the Bayesian revolution, and

– Bringing an inexhaustible source of innovation

We’ll be right back. Stay tuned!

This blog has been co-authored by Alexandre Poncin and Paul Majerus.

Paul Majerus is a Data analyst – Statistician at Sogeti Luxembourg.

The post The aRt of reconciling science and statistics (1/5) appeared first on SogetiLabs.


Presentation Menno van Doorn – The Future is already here and it is Synthetic

By Thijs Pepping

Menno is the Director of the Sogeti Research Institute for the Analysis of New Technology (VINT / SogetiLabs). In the video below, he shares the latest inspirational insights and findings from his research institute.

About Menno van Doorn

Menno is the Director of the Sogeti Research Institute for the Analysis of New Technology (VINT). He mixes personal life experiences with the findings of 19 years of research done at the VINT Research Institute. Menno has co-authored many books on the impact of new technology on business and society: In Pursuit of Digital Happiness, AI First, The UnOrganization, The App Effect, and many more. He received the Computable “Researcher of the Year” award in October 2007 for research in the field of open innovation and business transformation. He is a member of the Advisory Board of the Telecom & Management School of Business in Paris and of the Coordinating Commission of the Social Media Research Centre Somere at the University of Twente (Netherlands). Menno hosts a 1,300+-member business innovation community called “Social Strategy Talk”, with events held in Amsterdam. He was born in Gouda (say cheese) and went to Erasmus University to specialize in economic psychology, which formed the base of his human-centric view of new technology.

About the Utopia for Beginners event

The Utopia for Beginners events were held to discuss pressing questions and problems in IT. With over 250 IT executives, we explored new ways to innovate in a purposeful way. The input came from 14 inspiring keynotes given by academics, thought leaders, and business leaders, and was used in several round-table discussions to learn from peers in the same and different industries. We summarized these valuable insights in our report ‘Utopia for Executives’ which you can download here.


IT Certifications – 3 ways employer and employee can benefit

By Susan Thayer

The beginning of a new year is always a time for reflection and goal-setting. This is when we look back on the achievements we made in 2019 to see how we have learned and grown as people. It is also when we look forward to 2020 and envision how we can continue to improve and add value. For many IT professionals, including myself, this includes working towards and maintaining certifications.

When I first started in IT, having on-the-job experience was usually sufficient to prove you knew what you were talking about, but that is no longer the case. In fact, a 2019 report from Global Knowledge found that 93% of IT decision-makers believe a certified team member adds value above and beyond the cost of the certification, up from 35% in 2008.

Why the drastic change in attitude in just over ten years? The answer is that technology is more critical than ever for businesses. Speed to market, performance load-time, ease of use, and uptime are all make or break issues in today’s competitive world. The cost of mistakes and delays in building and supporting technologies can result in unhappy customers that take their business elsewhere.

Certified employees can help mitigate the risk of technology delays in three key ways.

1. Certification Validates IT Skills

The most common benefit of certification is that it validates a person’s IT skills. It proves without a doubt that they are familiar with the key elements of the technology or process. This reduces onboarding time for new hires. It also ensures better collaboration for team members because everyone is using the same terminology and knows the same techniques.

It’s usually easy to tell when I am working with people that have not taken any training or courses towards certification. Unfortunately, bringing non-certified team members up to speed slows the overall team down.

2. Certification Ensures Current Job Skills

Having an up-to-date certification ensures that an employee’s critical job skills are current. Virtually all certifications require some sort of regular maintenance or recertification effort, so the holder stays aware of the latest enhancements and capabilities. Companies benefit when their employees can apply the latest techniques and tools.

I can personally attest that I learn something new every time I study for a maintenance exam. This is knowledge that could otherwise be missed, resulting in the use of outdated techniques.

3. Certified People Make Great Employees

Lastly, because certifications take a great deal of time and effort to earn, holding one demonstrates initiative and a growth mindset. These are key characteristics that all managers want in their team members.

I find that certified IT folks generally never settle for the status quo. Just as they strive to improve themselves, they constantly push new and innovative ideas to improve the programs they are building.

As you can see, IT certifications benefit both the employer and the employee.

When I think about certifications and the value they bring, I cannot help but think of something Thomas Jefferson once wrote: “knowledge is power, knowledge is safety and knowledge is happiness”. Certifications give the gift of knowledge to their holders, and what better way to plan for 2020 than to strive towards power, safety, and happiness?

Cheers to 2020!


Ultimate List of DevOps Tools

By Ankur Jain

“Develop System Not Software”

DevOps is one of the biggest buzzwords in technology in recent times, as it offers organizations a massive amount of benefits by shortening their software development life cycle.

What is DevOps?

There is no single definition or right answer to the question “What is DevOps?”

DevOps is not a tool, technology, or any framework; it is more a philosophy and a concept. It is a set of practices that combines software development (Dev) and IT operations (Ops), which helps to shorten the systems development life cycle and provide continuous integration and delivery with high software quality.

If you are a beginner, then check out this introduction post or take this online course – Docker for an absolute beginner.

DevOps Benefits

  • Improved collaboration and communication
  • Faster software or product delivery
  • Continuous cost reduction
  • Improved process
  • Faster resolution of issues

In the DevOps world, there is no single magical tool that fits all the needs. It is about choosing the right tool that fits an organization’s needs. Let’s find out about them.

DevOps Tools

Planning & Collaboration


JIRA is one of the most popular project management tools. Developed by Atlassian, it is used for issue, bug, and project tracking, and allows users to track project and issue status. It integrates easily with other Atlassian products like Bitbucket, as well as with other DevOps tools like Jenkins.


Slack is a freemium, cloud-based collaboration tool that brings team communication and collaboration into one place. It can also be used to share documents and other information among team members, and it integrates easily with other tools like Git, Jenkins, and JIRA.


Zoom is a web conferencing and instant screen-sharing platform. You can have your team join through audio or video.

No matter how big your team is, Zoom can bring up to 1,000 participants into an online meeting.


Clarizen is collaborative project management software that helps with issue tracking, task management, and project portfolio management. It is easy to customize and has a user-friendly, interactive interface.


Asana is a mobile and web-based application designed to help teams organize, track, and manage their work effectively and efficiently. It is used to track teams’ day-to-day tasks and supports messaging and communication across the organization.

Source Code Management


SVN is a centralized version and source control tool developed by Apache. It helps developers maintain different versions of source code and keeps a full history of all changes.


Git is a distributed version control system aimed at speed, data integrity, and support for distributed, non-linear workflows. Beyond source code management, it can also be used to track changes in any set of files.


Bitbucket is a web-based hosting platform developed by Atlassian. It also offers an effective code review system and keeps track of every change in the code. It integrates easily with other DevOps tools like Jenkins and Bamboo.


GitHub is a code hosting platform designed for version control and collaboration. It offers all of the distributed version control and source code management (SCM) functionality of Git, plus its own features.

It offers access control and collaboration features such as bug tracking, feature requests, and task management for each project.



Build Tools

Apache Ant is an open-source, Java-based build and deployment tool configured via XML files. Its several built-in tasks allow us to compile, assemble, test, and run Java applications.


Maven is a build automation tool primarily used for Java projects. An XML file describes the software project being built, its dependencies on external components and modules, the build sequence, directories, and required plug-ins.
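As a sketch of what that XML looks like, here is a hypothetical minimal `pom.xml` (the group and artifact IDs are placeholders, and the JUnit dependency is purely illustrative):

```xml
<!-- Hypothetical minimal pom.xml; the coordinates below are placeholders. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example</groupId>
  <artifactId>demo-app</artifactId>
  <version>1.0.0</version>

  <dependencies>
    <!-- Illustrative external dependency pulled from the central repository. -->
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.13.2</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```

Running `mvn package` against a file like this resolves the dependencies, compiles the sources, runs the tests, and assembles the artifact.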


Grunt is a JavaScript command-line tool that helps build applications and lets developers automate repetitive tasks such as compilation, unit testing, code linting, and validation. It is a good alternative to tools like Make or Ant.


Gradle is an open-source build automation system that builds upon the concepts of Apache Maven and Apache Ant. It uses a Groovy-based DSL (a proper programming language) instead of XML configuration files, and it supports incremental builds by automatically determining which parts of the build are up to date.

Configuration Management


Puppet is an open-source configuration management tool used to configure, deploy, and manage numerous servers. It supports the concept of infrastructure as code, with configurations written in a Ruby-based DSL, and it supports dynamically scaling machines up and down as needed.


Chef is an open-source configuration management tool, developed by Opscode in Ruby, for managing infrastructure on virtual or physical machines. It helps manage complex infrastructure on the fly on virtual, physical, and cloud machines alike.


Ansible is an open-source IT configuration management, software provisioning, orchestration, and application deployment tool. It is a simple yet powerful tool for automating simple and complex multi-tier IT applications.
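As a sketch of that simplicity, here is a hypothetical Ansible playbook (the `web` host group and the nginx package are illustrative choices) that installs and starts a web server:

```yaml
# Hypothetical playbook: install and start nginx on hosts in the "web" group.
- hosts: web
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
```

Playbooks like this are declarative and idempotent: running the same playbook twice leaves already-configured servers unchanged.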


SaltStack is open-source software written in Python that uses a push model to execute commands via the SSH protocol. It supports both horizontal and vertical scaling, and YAML templates can be used to write scripts.


Terraform is an open-source tool for building, changing, deploying, and versioning infrastructure safely and efficiently. It can manage existing, popular service providers as well as custom in-house solutions. By defining infrastructure in configuration/code, it enables users to rebuild, change, and track infrastructure changes easily.
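As a sketch of infrastructure defined in code, here is a hypothetical Terraform configuration (the region and AMI ID are placeholders) describing a single AWS instance:

```hcl
# Hypothetical configuration: one EC2 instance; region and AMI are placeholders.
provider "aws" {
  region = "eu-west-1"
}

resource "aws_instance" "web" {
  ami           = "ami-12345678" # placeholder image ID
  instance_type = "t2.micro"

  tags = {
    Name = "devops-demo"
  }
}
```

`terraform plan` previews what would change and `terraform apply` makes it so; because the description lives in code, it can be versioned and reviewed like any other source file.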


Vagrant is one of the most popular tools for building and managing virtual machines (VMs). It has an easy-to-use, configurable workflow that focuses on automation, helping reduce development environment setup time and increase production parity.

Continuous Integration


Jenkins is one of the most popular open-source DevOps tools for continuous integration and delivery. It enables continuous integration and continuous delivery of projects, regardless of the platform users are working on, with the help of various build and deployment pipelines. Jenkins can be integrated with several testing and deployment tools.
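To give a flavor of such a pipeline, here is a hypothetical declarative `Jenkinsfile` (the Maven commands and the `deploy.sh` script are illustrative, not part of any real project):

```groovy
// Hypothetical declarative pipeline; the stage contents are project-specific.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B clean package' }
        }
        stage('Test') {
            steps { sh 'mvn -B test' }
        }
        stage('Deploy') {
            steps { sh './deploy.sh' } // placeholder deployment script
        }
    }
}
```

Checking a file like this into the repository lets Jenkins run the same build, test, and deploy steps on every commit.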

Travis CI

Travis CI is a cloud-hosted, distributed continuous integration platform used to build and test projects hosted on GitHub and Bitbucket. It is configured by adding a YAML file to the repository.

It is free for open-source projects and fee-based for private projects.
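As a sketch of that YAML configuration, a hypothetical `.travis.yml` for a Maven-based project might look like this:

```yaml
# Hypothetical .travis.yml for a Java/Maven project.
language: java
jdk:
  - openjdk11
script:
  - mvn -B test
```

Committing a file like this to the repository root is essentially all the configuration Travis CI needs to start building the project.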


Bamboo is one of the popular products developed by Atlassian to support seamless continuous integration. Most of its functionality is prebuilt, which means we do not need to download separate plugins as with Jenkins. It also integrates seamlessly with other Atlassian products like JIRA and Bitbucket.


Hudson is free software written in Java that runs in a servlet container such as GlassFish or Apache Tomcat. It can trigger your automation suite on any change in the corresponding source management system, such as Git or SVN, and it supports all Maven and Java-based projects.


TeamCity is a server-based continuous integration and build management tool developed by JetBrains. It has a simple, easy-to-use user interface (UI) and provides build progress, drill-down build information, and history for all configurations and projects.


CircleCI is available as both a cloud-based and an on-premise solution for continuous integration. It is easy and fast to get started with and supports lightweight, easily readable YAML configurations.

Continuous Security


Snyk integrates into the development lifecycle to find and fix open-source security vulnerabilities automatically. It supports JavaScript, .NET, PHP, npm, jQuery, Python, Java, etc., and can be integrated at the coding, code management, CI/CD, container, and deployment stages.

Snyk also maintains one of the largest open-source vulnerability databases.


Netsparker automatically scans your application for security flaws and provides actionable, prioritized reports so you can act based on priority. A typical DevOps security scenario is to scan each new commit, report any findings directly into a tracking system like Jira or GitHub, and rescan once the developer has fixed them. It can integrate at every stage of the SDLC.

Continuous Testing


Selenium is the most popular open-source testing tool. It supports test automation across various browsers and operating systems. It integrates easily with test management tools like ALM and JIRA, and with other DevOps tools like Jenkins, TeamCity, and Bamboo.


TestNG is an open-source testing framework designed after, and inspired by, JUnit and NUnit. It integrates easily with Selenium WebDriver to configure and run automated test scripts, and it generates test reports in formats such as HTML and XSLT.


JUnit is an open-source unit testing framework used by developers to write and run repeatable test cases. It supports various test annotations, with which any developer can write seamless unit test cases. It integrates easily with other DevOps tools like Jenkins and Git.

Continuous Monitoring


Nagios is an open-source tool and one of the most popular for continuous monitoring. It helps monitor systems, applications, services, and business processes in a DevOps culture, alerting users when anything goes wrong with the infrastructure and again when the issue has been resolved.


Grafana is an open-source analytics platform to monitor all the metrics from infrastructure, applications, and hardware devices. You can visualize the data, create and share a dashboard, set up alerts, and collaborate. You can pull data from more than 30 sources, including Prometheus, InfluxDB, Elasticsearch, AWS CloudWatch, etc.


Sensu is an open-source monitoring tool written in Ruby that helps monitor servers, services, applications, and cloud infrastructure simply and effectively. It scales easily, so we can monitor thousands of servers.

New Relic

New Relic is a software analytics product for application performance monitoring (APM) that delivers real-time and trending data about web application performance and end-user satisfaction. It supports end-to-end transaction tracing and displays traces with a variety of color-coded charts, graphs, and reports.


Datadog is an agent-based server metric tool. It supports integration with different web servers, apps, and cloud servers. Its dashboard service provides various graphs about real-time monitoring across the infrastructure.


ELK is a collection of three open-source products (Elasticsearch, Logstash, and Kibana), all developed, managed, and maintained by the company Elastic. It allows users to take data from any source, in any format, and then search, analyze, and visualize that data in real time.

Cloud Hosting


AWS is a cloud hosting platform created by Amazon that offers flexible, reliable, scalable, easy-to-use, and cost-effective solutions. Using this cloud platform, we don’t need to worry about setting up IT infrastructure, which usually takes a considerable amount of time.


Azure is a cloud computing platform, designed by Microsoft to build, deploy, test and manage applications and services through a global network of its data centers. The services provided by Microsoft Azure are in the form of PaaS (Platform as a service) and IaaS (Infrastructure as a service).


Google Cloud is a complete set of public cloud hosting and computing services offered by Google. It supports a wide range of services for computing, storage, and application development that run on Google hardware.

Containerization


Docker is a tool for creating, deploying, and running applications in containers. A container lets the developer package an application with all of the components and sub-components it needs, such as libraries and other dependencies, and ship it all out as a single package. It works on the concept of “ship and run anywhere”.
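As a sketch of that packaging, here is a hypothetical Dockerfile for a small Python application (the `app.py` and `requirements.txt` file names are illustrative):

```dockerfile
# Hypothetical Dockerfile; app.py and requirements.txt are placeholders.
FROM python:3.8-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```

`docker build -t myapp .` produces the single shippable image, and `docker run myapp` runs it anywhere Docker is installed.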


Kubernetes is an open-source container orchestration system, originally designed by Google and now maintained by the Cloud Native Computing Foundation. It is used to automate application deployment, scaling, and management, and it works with other container tools as well, including Docker.
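As a sketch of that orchestration, here is a hypothetical Kubernetes Deployment (the names and image tag are illustrative) that keeps three replicas of a container running:

```yaml
# Hypothetical Deployment: Kubernetes keeps three nginx replicas alive.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.17
          ports:
            - containerPort: 80
```

Applying this with `kubectl apply -f deployment.yaml` declares the desired state; Kubernetes then creates, restarts, or reschedules containers as needed to maintain it.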


I hope the above-listed tools help you with your DevOps journey.


It’s 2020; Time to Finally Implement the Modern Work Week Globally

By Christopher Kozanecki

Do you know about the age-old dream that the 21st century was supposed to bring us a life of leisure compared to when various work rules and technologies were established? For instance, did you know the 8-hour workday, which is still the standard for most companies, was originally proposed in the late 1800s but didn’t become law in most countries until the early 1900s – nearly 100 years ago?

What about the first laptop, invented in 1986? It wasn’t too impressive by modern standards, but it means we have had mobile computing for 30+ years, once you factor in the World Wide Web timeline. Even the iPhone is 13 years old, which is nearing relic status given how fast technology moves.

There is precedent for improving both productivity and worker happiness with shorter weeks, too. In 1913, Henry Ford used technology to make his cars faster and more reliable. To reduce the turnover this change caused, in 1914 he decided to increase wages (doubling pay for many workers) and reduce working hours at the same time. Employee happiness skyrocketed, and he had lines around the block of people wanting jobs. This foresight led to the emergence of the middle class.

I propose that as the younger population goes into the workforce, we consider the dreams of generations before us and use technology to help us establish those dreams.  With technology like Microsoft Teams & O365, we can collaborate anywhere even using video chat to ensure that the team feels connected to each other.  We can use tools like Azure DevOps to build and deploy solutions at any time of day without the worry that our production team is on holiday.

Recently, Microsoft tried their “Work-Life Choice Challenge”, closing all their offices in Japan on Fridays for the month of August without increasing the hours on the other days of the week. Although the work week went down by 20%, productivity went up by 40%.

Perpetual Guardian in New Zealand recently tried a similar experiment, reducing hours to 32 while continuing to pay full-time wages. They found that employees were 24% more engaged in their work-life balance and maintained the same productivity with reduced hours.

The point I am trying to make is that working from home made these results even more impressive. Ctrip recently ran a two-year study with 500 employees who met the criteria for working at home (a private workspace and broadband internet). They found that employees who worked from home had a productivity boost of nearly 20% over their in-office counterparts, while attrition was reduced by 50%. The company also saved nearly $2,000 per employee per year due to reduced space requirements.

I propose companies should consider these results, and implement the following work environment:

  • 1 day per week at the office – Staggered across employees 25% per day Mon-Thurs
  • 3 days per week at home
  • 3 days for the weekend.
  • Using technology to automate, connect, and engage employees

I for one would not only volunteer for this type of work week but would also love to help companies implement their O365 or DevOps transformations so they too can offer it to their employees. Here are some ways technologies enable the modern work week:

  • Cloud-Native: This isn’t technically new, but at this point are you using enough? With the scale of services like Azure, you can be sure that your employees have what they need while being protected from hackers.  And because cloud allows direct connection, you don’t need to worry about VPN hardware or data centers that need installation and setup: Cloud is designed to be safe, secure, and accessible remotely. 
  • Collaboration Tools: Using tools like Slack, you can give teams the ability to work together through chat applications and document sharing. You can also go further with subscriptions built around common collaboration: Teams is based on Skype, SharePoint, and OneDrive. Comprehensive tools like this allow for meeting scheduling, teleconferencing/telephony with video, robust file sharing including simultaneous editing of documents, and even connections to other tools for extensibility. Newer companies like Zoom are working to make this even better.
  •  Software-as-a-Service / Platform-as-a-Service:  Recently there has been a massive move in the industry to make software that is useful without the need to maintain it.  This has enabled us to move away from tools like Project to more robust tools like Workday or Trello; Software collaboration through Azure-DevOps or GitHub; Customer tools like Salesforce.  This will continue to push the boundaries of what needs us to sit in the office, and what can be done over coffee.
  • Cell Phones, Tablets, IoT & 5G: We cannot forget about those computers people carry around with them. With GPS, calling, nationwide data, and installable applications, it’s a fact of life that we are connected all the time. This is a huge benefit for a company, because it means someone can work when and where it suits them, instead of having to commute to the office and sit in the same place all the time. And with the trends towards IoT and 5G, we will have extremely high-speed wireless powering physical representations of remote objects. What is exciting about that is that even more jobs will be able to be done remotely – a doctor could operate robots from her mountain vista instead of having to end a vacation for an emergency surgery.

What technologies do you see contributing to this new reality of a modern work week, and how would you implement it as the manager of your teams?


Presentation Stine Jensen – Notes on the Synthetic

By Thijs Pepping

Stine Jensen is a philosopher, writer, and television program maker. In the video below, she breaks down ‘Synthetic’ and tells you all about the meaning of the synthetic times we live in.

About Stine Jensen

Stine Jensen studied literature and philosophy in Groningen, the Netherlands, after which she continued at the University of Maastricht and obtained her PhD with Why Women Love Apes. Translations of the book appeared in Chinese and French. When the silverback gorilla Bokito broke out of his enclosure in a Dutch zoo on 18 May 2007 and seriously injured a regular visitor, Stine gained national fame thanks to her earlier research. In the aftermath of the affair, she turned out to be a welcome guest on radio and television programs, sharing her unique perspective on the world.

About the Utopia for Beginners event

The Utopia for Beginners events were held in order to discuss pressing questions and problems in IT. With over 250 IT Executives we explored new ways to innovate in a purposeful way. The input came from 14 inspiring keynotes given by academics, thought leaders and business leaders and was used in several round table discussions to learn from peers in the same and different industries. We summarized these valuable insights in our report ‘Utopia for Executives’ which you can download here.


Happy New Year and thanks for all the fish

By Tuomas Peurakoski

Welcome to the FUTURE! It is the year 2020, the year of cyberpunk. The hoverboards of 2015 are now only a vague memory, something that “used to be”, and Blade Runner was set in November 2019. So clearly we are now living in the utopian times of technology.

To be fair: we are. I was visiting Korkeasaari, the Helsinki zoo, on the first day of 2020 with my daughter, and we were watching some exotic fish and corals swimming in the aquarium. Two elementary-school-age kids came to watch them with their mother, and after taking a close look, one of them said:

“Mom, those look computer-generated.”


It is interesting to think that over the decades I have watched the impossible happen. First, we had phones you could take with you! They weighed a ton at first. Then they got smaller. Then we invented GSM technology, and you could actually hear what other people were saying on the phone.

Then we started to use a little bit of internet on our phones, and now we are in wireless environments, streaming video and audio, with everything at hand. We are actually augmenting ourselves in cyberpunk fashion. This really is the future!

But my daughter and those kids I mentioned were born into a world where all of this has already been invented. And while we have been busy exploring how to make everything virtual, scalable, and robotized, we should also make sure that, while we can generate anything on a computer, we probably shouldn’t have only that.

Because in the end, the kids tell it like it is: if their first connection to exotic fishes is a computer-generated one then the real fish resembles the virtual fish to them, not the other way around.


The beginning of a new year usually means clean slates and promises to do better. I think this feeling is tenfold every time we change decades.

So while we strive to turn the world more and more into a technological and virtual utopia, let’s also think about how we can make sure the real fish keep swimming around.

Even if they look computer-generated.

P.S. If you are one of those people who, a couple of paragraphs back, said out loud "well, the new decade starts only in 2021", I only want to use xkcd's argument on the matter:

MC Hammer's U Can't Touch This (1990) was featured in the I Love the '90s series, not the '80s.

The post Happy New Year and thanks for all the fish appeared first on SogetiLabs.

IFML – Future-proof Time2Market coding strategy

By Daniel Pardhe

What is IFML?

IFML is a very powerful concept and, if implemented correctly, can yield an enormous ROI while simplifying some of the most time-consuming aspects of code development, maintenance, and migration: UI and navigation.

IFML, or the Interaction Flow Modeling Language, was formally adopted by the OMG in 2014 and then published in March 2015. Its primary purpose is to capture the UI and navigation flow of applications in a visual model, which can then be translated by code generators into any language(s) for which IFML engines are available. Originally known as WebML, it is now IFML because it is no longer limited to web development but is also used for mobile apps.

Thanks to IFML, it is now possible to visually model UI and navigation flows which can then be translated into the programming language of your choice. This opens enormous possibilities in rapid software development, especially if used in conjunction with UML and BPM specifications. How?

First, the visual model “is” the code and needs little documentation.

Since the model is visual, it clearly articulates the interaction flows and serves as documentation in itself, easily understandable by all stakeholders. With expert modelers, the models can be created during the early stages of development, even while business analysts are still gathering requirements. Remember, the visual model "is" the code. Agile development suddenly takes on a new form.

Second, the code is future-proof.

What do I mean by that? The model is visual and can be fed to any IFML-supporting engine to generate source code in the language of your choice. Yes, it is possible to write engines that read IFML and generate code in Objective-C, Java, C#, JS, or any other language. If one language becomes obsolete, migration to another is just a matter of auto-generating code with the latest engine for the new language.
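To make the engine idea concrete, here is a toy sketch in Go. It is not real IFML (the actual specification is an OMG metamodel with an XMI interchange format); it only illustrates the principle of one abstract interaction model feeding a pluggable code generator. All names here (`View`, `webGenerator`) are made up for the example:

```go
package main

import "fmt"

// View is a drastically simplified interaction model: a view
// container holding events that navigate to other views
// (the core idea behind IFML's view containers and flows).
type View struct {
	Name   string
	Events map[string]string // event name -> target view
}

// Generator turns the same abstract model into source code for
// one target platform; each platform gets its own Generator.
type Generator func(View) string

// webGenerator emits a web fragment for the model. Another
// Generator could emit mobile code from the identical View.
func webGenerator(v View) string {
	out := fmt.Sprintf("<section id=%q>\n", v.Name)
	for event, target := range v.Events {
		out += fmt.Sprintf("  <button onclick=\"goto('%s')\">%s</button>\n", target, event)
	}
	return out + "</section>"
}

func main() {
	login := View{Name: "Login", Events: map[string]string{"Submit": "Home"}}
	fmt.Println(webGenerator(login))
}
```

A real IFML engine works from the full metamodel rather than a two-field struct, but the principle is the same: one model, many interchangeable generators.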

Third, the code is consistent across all applications.

Imagine not having to manually enforce best practices across globally distributed development teams. The auto-generated code will already be engineered to follow best practices in coding, security, accessibility, and other areas.

I think that, with a plethora of coding languages, platforms, and cross-platform development options, visual modeling may just be the common glue that holds codebases together and future-proofs investments in development and migration. Stay tuned for deeper insights on IFML.


Whitepaper: Product Development – getting it right the first time

By Vijayan Ganapathy

Developing a product is as much about foresight and intuition as it is about not going down the wrong alley. A great product requires detail orientation, dotting the i's and crossing the t's. Most product development efforts fail while turning the concept into a product, falling into one or more fundamental traps that are avoidable if we can see the forest for the trees.

The fundamental principle of product development is to "uncover the surprises as soon as possible". Surprises lead to change, and change is always cheaper at the beginning of the development cycle than at any time later. The cost of making a change rises exponentially as time passes.

This whitepaper stresses establishing the groundwork for product development in a principled manner. Download it now to learn more about the specific pitfalls that undermine product development and what we can do to mitigate them.


Technology Labs podcast: Episode 5 – World Quality Report 2019

By Daniel Laskewitz

During this episode, we have Andrew Fullen as our guest. Andrew Fullen is Head of Technology and Innovation at Sogeti UK. We talk with him about the World Quality Report and other tech items.

World Quality Report 2019 | Facebook’s only fact-checking service in the Netherlands just quit | Star Wars immersive: Galaxy’s Edge


Presentation Nell Watson – Augmenting the human heart & soul

By Thijs Pepping

Nell Watson is an engineer, educator, and tech philosopher who grew up in Northern Ireland. In the video below, she shares the unique vision she presented at Sogeti's Executive Summit 'Utopia for Beginners' and tells you how we are entering an era in which we will augment the human heart and soul.

About Nell Watson

Nell Watson lectures globally on Machine Intelligence, AI philosophy, Human-Machine relations, and the Future of Human Society, serving on the Faculty of AI & Robotics at Singularity University. She is also Co-Founder of EthicsNet, a non-profit building a movement of people committed to helping machines understand humans better. This community acts as role models and guardians to raise kind AI by providing virtual experiences and collecting examples of pro-social practices.

About the Utopia for Beginners event

The Utopia for Beginners events were held in order to discuss pressing questions and problems in IT. With over 250 IT Executives we explored new ways to innovate in a purposeful way. The input came from 14 inspiring keynotes given by academics, thought leaders and business leaders and was used in several round table discussions to learn from peers in the same and different industries. We summarized these valuable insights in our report ‘Utopia for Executives’ which you can download here.


If you automate a mess, you get an automated mess

By Toni Kraja

Software testing: A check between dream and reality

Developing smoothly running, efficient, bug-free software is every engineer's dream. But to turn this dream into reality, one has to go through the rigorous exercise called software testing, or Software Quality Assurance. Software testing is a means of finding out whether your application functions the way you envisioned it to work. It is broadly divided into manual testing and test automation. While manual testing requires testers to work through the various features by hand, test automation uses an automation framework to check the various components of the software under test.

Test automation: Converting hours to minutes

Automation is no longer the new kid on the block, but it surely is the star kid catching everyone's attention. Why? Because it saves a lot of time and effort, and test automation is no exception. Test automation means using automation tools to set up testing parameters that act as preconditions, carry out the tests, and verify the actual results against the expected results. With automation tools, several testing processes can be run in parallel, sequentially, or on a regular schedule. It reduces the risk of human error during test execution and yields more reliable test execution timeframes.
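The execute-and-verify loop described above can be sketched in a few lines of Go. This is a generic illustration, not tied to any particular automation framework; `checkout` is a made-up stand-in for the system under test:

```go
package main

import "fmt"

// checkout is a stand-in for the system under test:
// it applies a percentage discount to an order total.
func checkout(total float64, discountPct float64) float64 {
	return total * (1 - discountPct/100)
}

// A test case pairs preconditions (inputs) with an expected result.
type testCase struct {
	name            string
	total, discount float64
	expected        float64
}

func main() {
	cases := []testCase{
		{"no discount", 100, 0, 100},
		{"half price", 200, 50, 100},
	}
	for _, tc := range cases {
		// Execute the test, then verify actual against expected.
		actual := checkout(tc.total, tc.discount)
		if actual != tc.expected {
			fmt.Printf("FAIL %s: got %v, want %v\n", tc.name, actual, tc.expected)
		} else {
			fmt.Printf("PASS %s\n", tc.name)
		}
	}
}
```

A real framework adds scheduling, parallel execution, and reporting on top, but the comparison of actual against expected results is the heart of every automated test.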

Requirement analysis: A groundwork for test automation

Right after you decide on the "why" of software development, it is imperative to focus on the "what" before you proceed to the "how". While deciding on the architecture and framework of the software, a plan for the testing architecture and framework is also necessary. This is where requirement analysis comes into play. A test automation solution should be treated as just another piece of software, so any rules or guidelines for software architects should also apply to test automation architects. While pondering the requirements for test automation, keep the following points in mind:

  • The phase(s) of testing that you intend to automate: design, execution, or generation
  • The level of the test you intend to automate, such as the component level, system level, integration level, or acceptance level – yes, acceptance tests can be automated too!
  • The type of test to be automated: For example, functionality, interoperability, conformance, or any kind of quality attributes.
  • The software product or product line under test that you want to automate.
  • The testing role(s) you want to automate – executor, manager, architect, or analyst.
  • The software-under-test (SUT) technologies that you want to automate.

A seven-lens analysis of the test automation requirements can help create a foolproof automation platform:

  1. Breaking the silo: It is always better to have a centralized approach towards test automation, not just among engineers, but also among the different operational units. It saves time and effort to have various centralized structures and ready-to-use resources.
  2. Using it your way: A platform that allows customization at runtime enables wide usage. It is important not to choose a test automation tool first and then adapt everything around it. A modular architecture with customizable definition, execution, and adaptation layers will most likely bring a more efficient and sustainable approach.
  3. Versatility is the key: The ability to automate across several platforms and multiple components within a test is something that should be on the cards.
  4. Putting the process in place: It is a good idea to document the processes, even the non-technical logic, so that QA engineers have a clear picture. Test automation is also a kind of software, and therefore the rules for documented specifications, manuals, and routines also apply to it.
  5. Uniformity is the key: Maintaining uniformity in reporting the stages of software development helps the QA engineers to analyze better. The same is true for the various stages in the test automation process, which should also follow a software development life cycle.
  6. Cost-cutting is imperative: The primary analysis before automating a test is to see how much operational cost can be reduced by automating the process. This allows the return on investment to be calculated realistically.
  7. Monitoring the test progress: A user-friendly dashboard that tracks the progress of a test comes in handy for executing and customizing it. The same applies to test automation. The dashboard could provide the option to open and track bugs, issues, or needs for improvement.

Escaping the “technical debt”

Requirement analysis in test automation is necessary to save yourself from technical debt. In an agile environment, technical debt refers to the gap between the ideal situation and the current situation, usually arising when development cuts corners to meet certain constraints of the application. Technical debt has an incredible impact on test automation, especially after the SUT changes. Through requirement analysis, you can build a test automation framework that helps reduce such hindrances.


Top 5 SogetiLabs posts from December 2019

By Sogeti Labs

Take a look at our most read and shared blog posts from December 2019

Predictions 2020

Blogs, websites, and social media are full of experts telling us what the predictions for the next year will be. Not wanting to be left out, I did some deep thinking (I even thought of using an AI to help) and put together a list of predictions for the next twelve months. Take a look here.

Service Virtualisation: The Future of it

Reduced time to market. Lower cost. High-quality delivery. Learn about these and the many other advantages of deploying service virtualisation amid current IT trends in this blog.

Architecture in this new world we live in – a DYA Whitepaper by Sogeti

This whitepaper gives a holistic view of Architecture in this new world we live in and also defines the elements that influence our new normal. Download now to learn how organizations can stay relevant in this new world!

Dev(QA)Ops: The Mount Olympus of the new software delivery civilization

DevOps is the Mount Olympus of the new software delivery civilization. And in the Mount Olympus of Dev(QA)Ops we have also the Twelve Olympians. Let’s take a look.

My topics for 2020

Edwin van der Thiel shares 5 exciting topics he’ll be covering in the coming year in this blog.


2020: Keep on learning

By Peter Rombouts

As the new year starts, many of us have New Year’s resolutions, and many of those will eventually perish within a month or two.

New Year, New Technology

I don’t have any resolutions. The only thing I try to do each year is to learn a new technique or language. Please note that new means new to me and not necessarily a brand new technique.

Why? In my day-job I focus on designing cloud native systems and architecture, and most of my ‘programming’ is done in Visio and PowerPoint. As my roots are in Software Engineering, I keep myself up-to-date by learning new languages and techniques.

For the upcoming year I’ve already made my choice. I started out with the following short-list.

  1. Scala
  2. Rust
  3. Go

Creating the short-list

My choice of these techniques is not random. In my work as an external examiner for the Avans and Fontys universities of applied sciences in the Netherlands, I see the work of many students each year. They inspire me to look at specific techniques that normally do not cross my path. In my day-to-day work, the most-used languages are C#, TypeScript, Java, and JavaScript, with frameworks like Angular.

1. Scala

Scala is a general-purpose programming language providing support for functional programming and a strong static type system. Designed to be concise, many of Scala’s design decisions aimed to address criticisms of Java.


Functional programming is something that I do not see often in my day-to-day job, so I was intrigued by the capabilities of this language. Also, some very fast and popular software is written in Scala. Examples are: Apache Kafka, Apache Spark and Akka.

2. Rust

Rust is a multi-paradigm system programming language focused on safety, especially safe concurrency. Rust is syntactically similar to C++, but is designed to provide better memory safety while maintaining high performance.


Originally developed by Mozilla and used within Firefox and at Dropbox, Rust has been the "most loved programming language" in the Stack Overflow Developer Survey every year since 2016, which drew my attention.

3. Go

Go, also known as Golang, is a statically typed, compiled programming language designed at Google by Robert Griesemer, Rob Pike, and Ken Thompson. Go is syntactically similar to C, but with memory safety, garbage collection, structural typing, and CSP-style concurrency.
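The CSP-style concurrency mentioned in that description is easy to demonstrate: goroutines communicate over channels rather than sharing memory. A minimal sketch:

```go
package main

import "fmt"

// square reads numbers from in, sends their squares to out,
// and closes out when in is exhausted (a tiny CSP-style pipeline).
func square(in <-chan int, out chan<- int) {
	for n := range in {
		out <- n * n
	}
	close(out)
}

func main() {
	in := make(chan int)
	out := make(chan int)
	go square(in, out) // runs concurrently with main

	// Feed the pipeline from another goroutine.
	go func() {
		for i := 1; i <= 3; i++ {
			in <- i
		}
		close(in)
	}()

	// Drain the results in order.
	for sq := range out {
		fmt.Println(sq) // prints 1, 4, 9
	}
}
```

The channel directions (`<-chan`, `chan<-`) are checked by the compiler, which is part of what makes this style of concurrency comparatively safe.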


Go has been around for quite some time, and has an impressive list of applications that were built with the language. Kubernetes, OpenShift, Docker and the list goes on.

Choosing my 2020 technique

For an internal project, we were looking for a tool that would give us programmability against a multitude of APIs. Instead of grabbing an off-the-shelf product, I investigated tools and frameworks that could help us build an MVP fast and reliably. This eventually led me to Terraform custom providers.

In Terraform, a Provider is the logical abstraction of an upstream API, and HashiCorp's guide details how to build a custom provider for Terraform.

Terraform will let you wrap any API, so it enables us to wrap our ITSM tooling, monitoring tooling, and what have you.
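Conceptually, a provider reduces to a uniform resource lifecycle laid over an upstream API. Terraform's real plugin SDK is far richer than this, but the shape of the abstraction can be sketched in Go as follows (`TicketAPI` is a made-up in-memory stand-in for the ITSM tooling being wrapped, not a real client):

```go
package main

import "fmt"

// Provider is, conceptually, what Terraform expects: a uniform
// lifecycle over a resource, whatever the upstream API looks like.
type Provider interface {
	Create(name string) (id string, err error)
	Read(id string) (name string, err error)
	Delete(id string) error
}

// TicketAPI is a hypothetical in-memory stand-in for an upstream
// ITSM API being wrapped by a custom provider.
type TicketAPI struct {
	tickets map[string]string
	nextID  int
}

func NewTicketAPI() *TicketAPI {
	return &TicketAPI{tickets: map[string]string{}}
}

func (t *TicketAPI) Create(name string) (string, error) {
	t.nextID++
	id := fmt.Sprintf("TICKET-%d", t.nextID)
	t.tickets[id] = name
	return id, nil
}

func (t *TicketAPI) Read(id string) (string, error) {
	name, ok := t.tickets[id]
	if !ok {
		return "", fmt.Errorf("ticket %s not found", id)
	}
	return name, nil
}

func (t *TicketAPI) Delete(id string) error {
	delete(t.tickets, id)
	return nil
}

func main() {
	var p Provider = NewTicketAPI()
	id, _ := p.Create("disk space alert")
	name, _ := p.Read(id)
	fmt.Println(id, name)
}
```

Once every upstream system is hidden behind the same lifecycle, Terraform can plan and apply changes against all of them uniformly; that is the appeal of wrapping in-house tooling this way.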

As Terraform and its custom providers are written in Go, that was my main reason to dive into this language. I've created a GitHub repo with an example custom provider; find it here on GitHub.

Keep on learning

I'm curious whether any of you are also keen on 'staying relevant' and want to keep up with new techniques and languages. Which techniques and languages do you try out and investigate? Please feel free to let me know and contact me on LinkedIn or Twitter!


Presentation Shravan Thaploo – Augmenting our Brains

By Thijs Pepping

Shravan Thaploo is a young research scientist at Harvard Medical School specializing in neuroscience. He has a unique perspective on the development of neuro data and neuro consumer devices. In the video below, he shares the vision he presented at Sogeti's Executive Summit 'Utopia for Beginners'.

About Shravan Thaploo

Having traveled both across the world and across the United States, Shravan Thaploo has worked in various research labs, including a social cognitive neuroscience lab and a behavioral neuroscience lab. He uses fMRI technology, simple brain-machine interfaces, and other techniques to understand how neuroscience can affect society and change the way people interact with one another.

About the Utopia for Beginners event

The Utopia for Beginners events were held in order to discuss pressing questions and problems in IT. With over 250 IT Executives we explored new ways to innovate in a purposeful way. The input came from 14 inspiring keynotes given by academics, thought leaders and business leaders and was used in several round table discussions to learn from peers in the same and different industries. We summarized these valuable insights in our report ‘Utopia for Executives’ which you can download here.


Scully: The Angular Static Site Generator

By Kim de Boer Schenk

If you are going to work on a project that requires fast performance on mobile and/or slow connections – or if you just like performant web applications – and you are able to work with Angular 8 or above, you might want to take a look at Scully, the Static Site Generator for Angular.

Normally, a Hello World application in Angular already has a size of around 300KB. It is slow on connections like 3G or lower and won't give the performance people expect these days. This is caused by the overhead that Angular carries with it as standard. And because Angular is a JavaScript framework, it will only work in browsers with JavaScript turned on.

As a result, users may drop out and you will have fewer visitors to your website.

Scully is the Glue

Scully can take your Angular application (and soon parts of your Angular application) and create a static version of it. Scully pre-renders each page in your application to plain HTML and CSS, so it will also work when JavaScript is disabled in the browser. Scully uses machine learning to find and visit all of the routes in an Angular project, after which it renders all the views and saves them as plain HTML files. Scully essentially turns your Angular application into a JAMstack site. The Hello World application of around 300KB mentioned above becomes about 20KB when we apply Scully to it – 15 times smaller than before, and therefore faster to load. The team has also created a plugin system to incorporate Route Plugins and Data Transform Plugins.

Scully turns Angular apps into wicked-fast static sites

At the time of writing, the alpha only supports Angular 9; the official release is expected to support Angular 8 and higher.

Scully is currently in its alpha version and available here.

The beta version is expected by the end of February 2020. Watch the introduction video with examples of Scully here.

You can read more about JAMstack here, or download the free O'Reilly book 'Modern Web Development on the JAMstack'.


Rest Assured

By Alistair Gerrard

There have been some interesting interpretations of what "going agile" means, and my two favourites are "no more documentation" and "no more testers". It is the latter I intend to explore in this article, but let's start with the former.

The "no documentation" myth is easier to tackle, as it is based on a genuine criticism of the tomes of documentation produced by waterfall projects. These tomes are rarely read or updated once reviewed and approved. It is fairly clear to even the most casual observer that the effort of creating this documentation is severely disproportionate to the usefulness of the document delivered.

A key difference between Waterfall and Agile approaches is not that one has documentation and the other doesn't. In Waterfall, the requirements documents were written upfront (as tomes). In Agile, the requirements are captured in feature files and live alongside the code as a living document, a knowledge repository updated with each subsequent iteration.

Without requirements being captured in this way, Agile would, as could happen in Waterfall, end up creating systems with a degree of uncertainty as to what the system is supposed to do. What is actually the case is that the documentation created under an Agile methodology is designed to be more efficient and more effective, focused on what is needed, with anything superfluous removed.

And so we can move on to the claim of “No more testers”.

Now there is a degree of truth to this if we expect a tester's role to remain the same over time – in particular, a role where testing is done manually at the end of, or at least overlapping with, development activity. In this sense, there is a genuine decline in demand for manual testers.

But saying there will be "no more testers" is a disingenuous summary of this shift as, by logical extension, it implies there will be no testing. Personally, I can imagine fewer than a handful of scenarios where this might be acceptable, but I'd still suggest that even in those scenarios it is wholly inadvisable!

What is supposed to happen is that testing happens earlier, and swathes of it are automated by developers. What we need to remember is that some of this will be unit testing, and it's not beyond my memory to recall times when some developers did no unit testing whatsoever… so some of this new testing is actually compensating for a deficiency in old practices!

This leaves us with the other part of the equation to contend with, where developers are also automating some system and system integration testing, removing this from the realm of manual testers. I am in agreement that this will reduce the need for manual testers.

But what the sweeping statement of "no more testers" fails to address is that quality (or test) professionals are still required to assure the tests. Whilst it's relatively straightforward to write a piece of code and some accompanying tests, that is not a cast-iron guarantee of quality.

The reality is that testing is not evaporating, but it is changing. The quality control aspect is being automated and shifted left as much as possible. What remains is quality assurance, which takes the expertise of testers and applies it to make sure the correct checks are done at the most appropriate time, giving confidence in quality deliverables time after time.

This is important and well understood on a small scale. Within Sogeti, we have also attained great success at delivering quality assurance at scale for large organizations.


Magic is in the air – Gartner’s Magic Quadrant for Software Test Automation

By Marco Venzelaar

Last week, Gartner published the always eagerly awaited Magic Quadrant for Software Test Automation. Over the last few days, it has been circulated and quoted many times. But what does it all mean? Are the non-Leader tools not worth it? Do we need to follow the leader(s)? Are there only 10 automation tool vendors?

Let's try to make a bit more sense of this. In the report, Gartner clearly states why it used only 10 vendors and all the criteria used to include them. That is fine, but the interesting bit is in the "Market Overview" section, where you can find the following quote: "Open-source and cloud solutions have strongly disrupted the market from a pricing perspective." This is great, as open-source (being free) of course has a huge value-for-money advantage, but that is not the whole story. In the same paragraph there is a warning: open-source tools can carry a high cost related to building and maintaining a working solution, and the productivity achieved may not always be at the desired levels. The latter is not to be discounted, as these are too often the hidden costs of using open-source tools.

Seeing Tricentis, Eggplant, and SmartBear in the "Leaders" quadrant is not surprising, as all three have been working on expanding their tools with new technologies and capabilities in 2019. What is surprising is that Micro Focus has dropped out of the "Leaders" quadrant. Looking back in history, it seems that the only reason Micro Focus was in that quadrant (for test automation tooling) is likely its 2017 purchase of HPE, which then had leading test automation software. This brings up an interesting thought: where did today's leaders come from, and where have the previous leaders gone? I have taken a look at Gartner's Magic Quadrants over the last seven years and tracked the movement of each company.

The seven-year timeline shows that SmartBear and Worksoft both started as "Visionaries" before breaking into the "Leaders" quadrant. They share the achievement of having featured in every quadrant over the past seven years; only in 2019 is SmartBear regarded as a Leader while Worksoft has become a Niche Player. Eggplant (which started as TestPlant) took four years as a Visionary to break through to the "Leaders" quadrant. Only Tricentis has held on to the "Leaders" title for the last five years, still gradually improving its position every year; it seems to be in a stable position.

The biggest vendors, IBM and Micro Focus (which includes HPE), now reside in the "Challengers" quadrant; both have large portfolios of test tools but are currently not regarded as leaders in the test automation market. Of course, they remain a force to be reckoned with, but it is interesting that the three biggest leaders are all relatively small companies. Micro Focus has moved across three quadrants over the last seven years. Looking at the trajectories these companies have shown over the last few years, it promises to be an interesting 2020!

What does this mean for day-to-day decisions? As detailed in Gartner's report, each vendor has its strengths and weaknesses, and the report does not include any of the open-source tools. So, if you are looking at test automation tooling in 2020, it is important to make a balanced decision and be true to yourself and your organisation. Ask for help – we have plenty of contacts with these vendors and a wealth of experience with open-source tools. Remember, a test tool is not just for Christmas: it is a strategic implementation!


Why self-evident things are not self-evident to a group

By Tuomas Peurakoski

Let's assume that you are going to take bus 615 to Helsinki Airport. You are not leaving from the origin station but a couple of stops later. So you take your bag and wait at the bus stop, see bus 615 approach, get on board, and ride the bus to the airport. As simple as can be.

Let's try the same again. This time you're travelling with your friends in a group of 10. You all know that you need to take bus 615 to the airport from the exact same bus stop. The bus approaches and none of you lifts a finger. Chances are the bus drives right past you and none of you gets to the airport. Why would that happen? Because social influence affects us. If even one of you flags down the bus and says out loud "that's our bus", the group will get on.

It's easy to take ownership of your actions while you are alone. However, we very seldom work on projects alone, and as such we are subject to the pros and cons of group work. The most self-evident things might become not so evident if nobody takes the ball.

This is why we need to write down what we have agreed upon and how the team does what it does. Defining a Definition of Done is a good example. What actually is "done"? Or how about issue tracking? What do we actually write about the issues? Are we fixing everything right now? What is the process if we push something to a later date? We might think we all agree that we're going to fix something in February, but unless we write it down and commit to it, someone might come along in January and ask "why is this not fixed?" You might reply "it's what we decided", and then the person asks "who decided? When did we decide this?"

In that situation, everybody loves a good paper trail.

Even the most self-evident things might become non-evident if you challenge them. Let's say you barge into a room of developers and ask "who is responsible for the decision that age is stored in our system as a number?" Then count the hands actually willing to stand behind this self-evident thing amid the confusion.

This is why one of the best things you can do in a group is to decide things together, write them down and agree on a process. If you, later on, find out the process is not working at all, decide together to change it as well.

And lastly: remember that even if you don’t decide anything it is also a decision. Just remember to write down that you decided not to decide.

Oh, by the way: if you thought that all the above is self-evident, well…


Presentation Andrew Keen – Utopia for Beginners

By Thijs Pepping

Andrew Keen was among the earliest to write about the dangers of the Internet to our culture and society. In the video below, he shares the vision he presented at Sogeti's Executive Summit 'Utopia for Beginners'.

About Andrew Keen

Keen's new book, How to Fix the Future, based on research, analysis, and Keen's own reporting in America and around the world, showcases global solutions for our digital predicament. Keen identifies five broad strategies to tackle the digital future: competitive innovation, government regulation, consumer choice, social responsibility by business leaders, and education. How to Fix the Future has been called "the most significant work so far in an emerging body of literature…in which technology's smartest thinkers are raising alarm bells about the state of the Internet, and laying groundwork for how to fix it".

About the Utopia for Beginners event

The Utopia for Beginners events were held in order to discuss pressing questions and problems in IT. With over 250 IT Executives we explored new ways to innovate in a purposeful way. The input came from 14 inspiring keynotes given by academics, thought leaders and business leaders and was used in several round table discussions to learn from peers in the same and different industries. We summarized these valuable insights in our report ‘Utopia for Executives’ which you can download here.


Laying the foundations for 2020 growth

By Tori Hume

This January, do not ask me what my New Year’s resolutions are. Instead, ask me what I learned from my 2019 failures. 

For many in the Western world, December is a month of chaos marketed as a celebration of community. 

We spend the days between the 1st and the 21st spreading ourselves thin, trying to close off projects, finalise budgets and simultaneously spread Christmas joy as we shop for, and meet with, family and friends. 

When all is said and done, the year comes to a close and a herd of emotionally and physically exhausted humans sits down to reflect on the previous 365 days. This mix is not necessarily conducive to a kind retrospective. Many people can be left feeling disillusioned with what they have not achieved, or bitter about things that did not go as they dreamed. 

With January comes a fresh start, and so many are quick to ignore their past failures and embark on a new set of goals. However, maybe we shouldn’t be so hasty to wipe the slate clean. Instead of bounding towards the unknown in the year ahead, we should stop and take the time to learn from what did not go our way in 2019. 

Be it a delayed project, a missed role, a failed exam or an ended relationship, there is something to be learned from each. If you only look at your successes, you are denying yourself the opportunity for growth, and this will often result in you reliving past mistakes. 

In my office, we hold a “Lessons Learned” session at the end of each project. Here we look at what worked and what didn’t, asking ourselves what good and bad points we can take with us into future projects. If you treat each experience as a learning opportunity, then experiences no longer represent mistakes or successes. Regardless of the end result, they become the building blocks for new experiences. 

Have you worked with difficult colleagues? Look at the interactions you had and think of how these could have gone differently. Maybe you could have been calmer, or taken the time to understand their drivers? If nothing else, you can at least learn from them which behaviours you don’t want to replicate. 

Maybe a project got delayed. If so, can you identify the stress points that contributed to it? Are these things that you could mitigate against in future projects? Have you learned new reporting tools or governance tactics that you would like to implement on future engagements?

A wise man once told me, “Experience is knowing how not to do something”, and a wise culture taught me wabi-sabi: allowing yourself to accept the transience and imperfections of life.

So, before you set your 2020 work KPIs or personal goals, take a moment to look kindly on your ups and downs of 2019. Break them into manageable components and use them to build the foundations of your 2020.

The post Laying the foundations for 2020 growth appeared first on SogetiLabs.

Webinar: Identity Access Management on cloud

Par Amarjeet Singh

As more and more customers move their applications to the cloud, the main questions that arise are:

  • How do I secure my data and processes while they run in the cloud?
  • How do I make sure all my applications are protected by a properly implemented authentication and authorization approach?
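The second question usually comes down to token-based access: an identity provider issues a signed, time-limited token, and each application verifies the signature and expiry before trusting the caller. The webinar covers the full cloud IAM picture; as a minimal illustration only (not the approach presented in the webinar), here is a sketch of issuing and verifying an HMAC-signed token using just the Python standard library. The secret, claim names and lifetime are all hypothetical placeholders:

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared secret; in a real deployment this would come from a
# key vault or the identity provider, never be hard-coded.
SECRET = b"demo-secret"

def issue_token(subject: str, ttl: int = 3600) -> str:
    """Issue a minimal HMAC-signed token (a simplified, JWT-like sketch)."""
    payload = json.dumps({"sub": subject, "exp": int(time.time()) + ttl})
    body = base64.urlsafe_b64encode(payload.encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_token(token: str) -> dict:
    """Check the token's signature and expiry; raise ValueError on failure."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences.
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims

token = issue_token("alice")
print(verify_token(token)["sub"])  # -> alice
```

Production systems would instead rely on a standard such as JWT (RFC 7519) with asymmetric signatures, so that applications can verify tokens without sharing the signing secret.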

Watch the webinar to learn how to address these challenges.

The post Webinar: Identity Access Management on cloud appeared first on SogetiLabs.