Jan 23 17:27:10.539748 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Jan 23 17:27:10.539766 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Jan 23 15:38:20 -00 2026
Jan 23 17:27:10.539773 kernel: KASLR enabled
Jan 23 17:27:10.539777 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 23 17:27:10.539782 kernel: printk: legacy bootconsole [pl11] enabled
Jan 23 17:27:10.539786 kernel: efi: EFI v2.7 by EDK II
Jan 23 17:27:10.539792 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db7d598
Jan 23 17:27:10.539796 kernel: random: crng init done
Jan 23 17:27:10.539800 kernel: secureboot: Secure boot disabled
Jan 23 17:27:10.539804 kernel: ACPI: Early table checksum verification disabled
Jan 23 17:27:10.539808 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Jan 23 17:27:10.539812 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 17:27:10.539817 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 17:27:10.539822 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 23 17:27:10.539827 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 17:27:10.539832 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 17:27:10.539836 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 17:27:10.539842 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 17:27:10.539846 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 17:27:10.539851 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 17:27:10.539855 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 23 17:27:10.539859 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 17:27:10.539864 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 23 17:27:10.539868 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 23 17:27:10.539873 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 23 17:27:10.539877 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Jan 23 17:27:10.539881 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Jan 23 17:27:10.539887 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 23 17:27:10.539892 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 23 17:27:10.539896 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 23 17:27:10.539900 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 23 17:27:10.539905 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 23 17:27:10.539909 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 23 17:27:10.539914 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 23 17:27:10.539918 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 23 17:27:10.539922 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 23 17:27:10.539927 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Jan 23 17:27:10.539931 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Jan 23 17:27:10.539937 kernel: Zone ranges:
Jan 23 17:27:10.539941 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 23 17:27:10.539948 kernel: DMA32 empty
Jan 23 17:27:10.539953 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 23 17:27:10.539958 kernel: Device empty
Jan 23 17:27:10.539963 kernel: Movable zone start for each node
Jan 23 17:27:10.539968 kernel: Early memory node ranges
Jan 23 17:27:10.539973 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 23 17:27:10.539977 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Jan 23 17:27:10.539982 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Jan 23 17:27:10.539987 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Jan 23 17:27:10.539991 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Jan 23 17:27:10.539996 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Jan 23 17:27:10.540001 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 23 17:27:10.540006 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 23 17:27:10.540011 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 23 17:27:10.540016 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Jan 23 17:27:10.540021 kernel: psci: probing for conduit method from ACPI.
Jan 23 17:27:10.540025 kernel: psci: PSCIv1.3 detected in firmware.
Jan 23 17:27:10.540030 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 17:27:10.540035 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 23 17:27:10.540039 kernel: psci: SMC Calling Convention v1.4
Jan 23 17:27:10.540044 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 23 17:27:10.540049 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 23 17:27:10.540053 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 23 17:27:10.540058 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 23 17:27:10.540064 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 17:27:10.540068 kernel: Detected PIPT I-cache on CPU0
Jan 23 17:27:10.540073 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Jan 23 17:27:10.540078 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 17:27:10.540082 kernel: CPU features: detected: Spectre-v4
Jan 23 17:27:10.540087 kernel: CPU features: detected: Spectre-BHB
Jan 23 17:27:10.540092 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 23 17:27:10.540096 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 23 17:27:10.540101 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Jan 23 17:27:10.540106 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 23 17:27:10.540111 kernel: alternatives: applying boot alternatives
Jan 23 17:27:10.540117 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=35f959b0e84cd72dec35dcaa9fdae098b059a7436b8ff34bc604c87ac6375079
Jan 23 17:27:10.540122 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 17:27:10.540127 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 17:27:10.540131 kernel: Fallback order for Node 0: 0
Jan 23 17:27:10.540136 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Jan 23 17:27:10.540141 kernel: Policy zone: Normal
Jan 23 17:27:10.540145 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 17:27:10.540150 kernel: software IO TLB: area num 2.
Jan 23 17:27:10.540155 kernel: software IO TLB: mapped [mem 0x0000000037370000-0x000000003b370000] (64MB)
Jan 23 17:27:10.540159 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 17:27:10.540165 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 17:27:10.540170 kernel: rcu: RCU event tracing is enabled.
Jan 23 17:27:10.540175 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 17:27:10.540180 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 17:27:10.540185 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 17:27:10.540189 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 17:27:10.540194 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 17:27:10.540199 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 17:27:10.540204 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 17:27:10.540208 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 17:27:10.540213 kernel: GICv3: 960 SPIs implemented
Jan 23 17:27:10.540219 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 17:27:10.540223 kernel: Root IRQ handler: gic_handle_irq
Jan 23 17:27:10.540228 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jan 23 17:27:10.540232 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Jan 23 17:27:10.540237 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 23 17:27:10.540242 kernel: ITS: No ITS available, not enabling LPIs
Jan 23 17:27:10.540247 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 17:27:10.540252 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Jan 23 17:27:10.540256 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 17:27:10.540261 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Jan 23 17:27:10.542293 kernel: Console: colour dummy device 80x25
Jan 23 17:27:10.542329 kernel: printk: legacy console [tty1] enabled
Jan 23 17:27:10.542335 kernel: ACPI: Core revision 20240827
Jan 23 17:27:10.542341 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Jan 23 17:27:10.542346 kernel: pid_max: default: 32768 minimum: 301
Jan 23 17:27:10.542351 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 17:27:10.542357 kernel: landlock: Up and running.
Jan 23 17:27:10.542362 kernel: SELinux: Initializing.
Jan 23 17:27:10.542368 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 17:27:10.542373 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 17:27:10.542378 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Jan 23 17:27:10.542384 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0
Jan 23 17:27:10.542393 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 23 17:27:10.542399 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 17:27:10.542405 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 17:27:10.542411 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 17:27:10.542416 kernel: Remapping and enabling EFI services.
Jan 23 17:27:10.542422 kernel: smp: Bringing up secondary CPUs ...
Jan 23 17:27:10.542428 kernel: Detected PIPT I-cache on CPU1
Jan 23 17:27:10.542433 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 23 17:27:10.542438 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Jan 23 17:27:10.542444 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 17:27:10.542450 kernel: SMP: Total of 2 processors activated.
Jan 23 17:27:10.542455 kernel: CPU: All CPU(s) started at EL1
Jan 23 17:27:10.542460 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 17:27:10.542466 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 23 17:27:10.542471 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 23 17:27:10.542476 kernel: CPU features: detected: Common not Private translations
Jan 23 17:27:10.542483 kernel: CPU features: detected: CRC32 instructions
Jan 23 17:27:10.542488 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Jan 23 17:27:10.542493 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 23 17:27:10.542499 kernel: CPU features: detected: LSE atomic instructions
Jan 23 17:27:10.542504 kernel: CPU features: detected: Privileged Access Never
Jan 23 17:27:10.542509 kernel: CPU features: detected: Speculation barrier (SB)
Jan 23 17:27:10.542515 kernel: CPU features: detected: TLB range maintenance instructions
Jan 23 17:27:10.542521 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 23 17:27:10.542526 kernel: CPU features: detected: Scalable Vector Extension
Jan 23 17:27:10.542531 kernel: alternatives: applying system-wide alternatives
Jan 23 17:27:10.542537 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jan 23 17:27:10.542542 kernel: SVE: maximum available vector length 16 bytes per vector
Jan 23 17:27:10.542547 kernel: SVE: default vector length 16 bytes per vector
Jan 23 17:27:10.542554 kernel: Memory: 3979900K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 12480K init, 1038K bss, 193072K reserved, 16384K cma-reserved)
Jan 23 17:27:10.542560 kernel: devtmpfs: initialized
Jan 23 17:27:10.542566 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 17:27:10.542571 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 17:27:10.542576 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 23 17:27:10.542581 kernel: 0 pages in range for non-PLT usage
Jan 23 17:27:10.542587 kernel: 515168 pages in range for PLT usage
Jan 23 17:27:10.542592 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 17:27:10.542598 kernel: SMBIOS 3.1.0 present.
Jan 23 17:27:10.542604 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Jan 23 17:27:10.542609 kernel: DMI: Memory slots populated: 2/2
Jan 23 17:27:10.542614 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 17:27:10.542620 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 17:27:10.542625 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 17:27:10.542630 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 17:27:10.542636 kernel: audit: initializing netlink subsys (disabled)
Jan 23 17:27:10.542642 kernel: audit: type=2000 audit(0.060:1): state=initialized audit_enabled=0 res=1
Jan 23 17:27:10.542647 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 17:27:10.542653 kernel: cpuidle: using governor menu
Jan 23 17:27:10.542658 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 17:27:10.542663 kernel: ASID allocator initialised with 32768 entries
Jan 23 17:27:10.542669 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 17:27:10.542674 kernel: Serial: AMBA PL011 UART driver
Jan 23 17:27:10.542680 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 17:27:10.542686 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 17:27:10.542691 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 17:27:10.542696 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 17:27:10.542702 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 17:27:10.542707 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 17:27:10.542712 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 17:27:10.542718 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 17:27:10.542724 kernel: ACPI: Added _OSI(Module Device)
Jan 23 17:27:10.542729 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 17:27:10.542734 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 17:27:10.542739 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 17:27:10.542745 kernel: ACPI: Interpreter enabled
Jan 23 17:27:10.542750 kernel: ACPI: Using GIC for interrupt routing
Jan 23 17:27:10.542756 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 23 17:27:10.542762 kernel: printk: legacy console [ttyAMA0] enabled
Jan 23 17:27:10.542767 kernel: printk: legacy bootconsole [pl11] disabled
Jan 23 17:27:10.542772 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 23 17:27:10.542778 kernel: ACPI: CPU0 has been hot-added
Jan 23 17:27:10.542783 kernel: ACPI: CPU1 has been hot-added
Jan 23 17:27:10.542788 kernel: iommu: Default domain type: Translated
Jan 23 17:27:10.542794 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 17:27:10.542800 kernel: efivars: Registered efivars operations
Jan 23 17:27:10.542805 kernel: vgaarb: loaded
Jan 23 17:27:10.542810 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 17:27:10.542815 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 17:27:10.542821 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 17:27:10.542826 kernel: pnp: PnP ACPI init
Jan 23 17:27:10.542832 kernel: pnp: PnP ACPI: found 0 devices
Jan 23 17:27:10.542837 kernel: NET: Registered PF_INET protocol family
Jan 23 17:27:10.542843 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 17:27:10.542848 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 17:27:10.542853 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 17:27:10.542859 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 17:27:10.542864 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 17:27:10.542870 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 17:27:10.542876 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 17:27:10.542881 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 17:27:10.542886 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 17:27:10.542892 kernel: PCI: CLS 0 bytes, default 64
Jan 23 17:27:10.542897 kernel: kvm [1]: HYP mode not available
Jan 23 17:27:10.542902 kernel: Initialise system trusted keyrings
Jan 23 17:27:10.542907 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 17:27:10.542913 kernel: Key type asymmetric registered
Jan 23 17:27:10.542919 kernel: Asymmetric key parser 'x509' registered
Jan 23 17:27:10.542924 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jan 23 17:27:10.542929 kernel: io scheduler mq-deadline registered
Jan 23 17:27:10.542934 kernel: io scheduler kyber registered
Jan 23 17:27:10.542939 kernel: io scheduler bfq registered
Jan 23 17:27:10.542945 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 17:27:10.542951 kernel: thunder_xcv, ver 1.0
Jan 23 17:27:10.542956 kernel: thunder_bgx, ver 1.0
Jan 23 17:27:10.542961 kernel: nicpf, ver 1.0
Jan 23 17:27:10.542966 kernel: nicvf, ver 1.0
Jan 23 17:27:10.543133 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 17:27:10.543203 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T17:27:07 UTC (1769189227)
Jan 23 17:27:10.543212 kernel: efifb: probing for efifb
Jan 23 17:27:10.543218 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 23 17:27:10.543223 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 23 17:27:10.543228 kernel: efifb: scrolling: redraw
Jan 23 17:27:10.543233 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 17:27:10.543239 kernel: Console: switching to colour frame buffer device 128x48
Jan 23 17:27:10.543244 kernel: fb0: EFI VGA frame buffer device
Jan 23 17:27:10.543250 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 23 17:27:10.543256 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 17:27:10.543261 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jan 23 17:27:10.543278 kernel: watchdog: NMI not fully supported
Jan 23 17:27:10.543285 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 17:27:10.543290 kernel: NET: Registered PF_INET6 protocol family
Jan 23 17:27:10.543295 kernel: Segment Routing with IPv6
Jan 23 17:27:10.543302 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 17:27:10.543307 kernel: NET: Registered PF_PACKET protocol family
Jan 23 17:27:10.543312 kernel: Key type dns_resolver registered
Jan 23 17:27:10.543317 kernel: registered taskstats version 1
Jan 23 17:27:10.543323 kernel: Loading compiled-in X.509 certificates
Jan 23 17:27:10.543328 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2bef814d3854848add18d21bd2681c3d03c60f56'
Jan 23 17:27:10.543334 kernel: Demotion targets for Node 0: null
Jan 23 17:27:10.543340 kernel: Key type .fscrypt registered
Jan 23 17:27:10.543345 kernel: Key type fscrypt-provisioning registered
Jan 23 17:27:10.543350 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 17:27:10.543356 kernel: ima: Allocated hash algorithm: sha1
Jan 23 17:27:10.543361 kernel: ima: No architecture policies found
Jan 23 17:27:10.543366 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 17:27:10.543372 kernel: clk: Disabling unused clocks
Jan 23 17:27:10.543377 kernel: PM: genpd: Disabling unused power domains
Jan 23 17:27:10.543383 kernel: Freeing unused kernel memory: 12480K
Jan 23 17:27:10.543389 kernel: Run /init as init process
Jan 23 17:27:10.543394 kernel: with arguments:
Jan 23 17:27:10.543399 kernel: /init
Jan 23 17:27:10.543404 kernel: with environment:
Jan 23 17:27:10.543410 kernel: HOME=/
Jan 23 17:27:10.543415 kernel: TERM=linux
Jan 23 17:27:10.543421 kernel: hv_vmbus: Vmbus version:5.3
Jan 23 17:27:10.543427 kernel: SCSI subsystem initialized
Jan 23 17:27:10.543432 kernel: hv_vmbus: registering driver hid_hyperv
Jan 23 17:27:10.543437 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 23 17:27:10.543531 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 23 17:27:10.543539 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 23 17:27:10.543546 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 23 17:27:10.543551 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 23 17:27:10.543557 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 23 17:27:10.543562 kernel: PTP clock support registered
Jan 23 17:27:10.543567 kernel: hv_utils: Registering HyperV Utility Driver
Jan 23 17:27:10.543572 kernel: hv_vmbus: registering driver hv_utils
Jan 23 17:27:10.543578 kernel: hv_utils: Heartbeat IC version 3.0
Jan 23 17:27:10.543584 kernel: hv_utils: Shutdown IC version 3.2
Jan 23 17:27:10.543589 kernel: hv_utils: TimeSync IC version 4.0
Jan 23 17:27:10.543595 kernel: hv_vmbus: registering driver hv_storvsc
Jan 23 17:27:10.543690 kernel: scsi host0: storvsc_host_t
Jan 23 17:27:10.543770 kernel: scsi host1: storvsc_host_t
Jan 23 17:27:10.543860 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 23 17:27:10.543943 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 23 17:27:10.544018 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 23 17:27:10.544092 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks
Jan 23 17:27:10.544167 kernel: sd 1:0:0:0: [sda] Write Protect is off
Jan 23 17:27:10.544241 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 23 17:27:10.546392 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 23 17:27:10.546511 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#253 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 23 17:27:10.546587 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 23 17:27:10.546594 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 17:27:10.546674 kernel: sd 1:0:0:0: [sda] Attached SCSI disk
Jan 23 17:27:10.546753 kernel: sr 1:0:0:2: [sr0] scsi-1 drive
Jan 23 17:27:10.546762 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 17:27:10.546835 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0
Jan 23 17:27:10.546842 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 17:27:10.546848 kernel: device-mapper: uevent: version 1.0.3
Jan 23 17:27:10.546853 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 17:27:10.546859 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jan 23 17:27:10.546864 kernel: raid6: neonx8 gen() 18561 MB/s
Jan 23 17:27:10.546871 kernel: raid6: neonx4 gen() 18577 MB/s
Jan 23 17:27:10.546876 kernel: raid6: neonx2 gen() 17097 MB/s
Jan 23 17:27:10.546882 kernel: raid6: neonx1 gen() 15011 MB/s
Jan 23 17:27:10.546887 kernel: raid6: int64x8 gen() 10539 MB/s
Jan 23 17:27:10.546892 kernel: raid6: int64x4 gen() 10617 MB/s
Jan 23 17:27:10.546897 kernel: raid6: int64x2 gen() 8969 MB/s
Jan 23 17:27:10.546902 kernel: raid6: int64x1 gen() 7010 MB/s
Jan 23 17:27:10.546909 kernel: raid6: using algorithm neonx4 gen() 18577 MB/s
Jan 23 17:27:10.546914 kernel: raid6: .... xor() 15120 MB/s, rmw enabled
Jan 23 17:27:10.546920 kernel: raid6: using neon recovery algorithm
Jan 23 17:27:10.546925 kernel: xor: measuring software checksum speed
Jan 23 17:27:10.546930 kernel: 8regs : 28591 MB/sec
Jan 23 17:27:10.546936 kernel: 32regs : 28297 MB/sec
Jan 23 17:27:10.546941 kernel: arm64_neon : 37398 MB/sec
Jan 23 17:27:10.546946 kernel: xor: using function: arm64_neon (37398 MB/sec)
Jan 23 17:27:10.546953 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 17:27:10.546958 kernel: BTRFS: device fsid 8d2a73a7-ed2a-4757-891b-9df844aa914e devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (429)
Jan 23 17:27:10.546964 kernel: BTRFS info (device dm-0): first mount of filesystem 8d2a73a7-ed2a-4757-891b-9df844aa914e
Jan 23 17:27:10.546969 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 23 17:27:10.546975 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 17:27:10.546980 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 17:27:10.546986 kernel: loop: module loaded
Jan 23 17:27:10.546992 kernel: loop0: detected capacity change from 0 to 91840
Jan 23 17:27:10.546998 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 17:27:10.547004 systemd[1]: Successfully made /usr/ read-only.
Jan 23 17:27:10.547012 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 17:27:10.547018 systemd[1]: Detected virtualization microsoft.
Jan 23 17:27:10.547025 systemd[1]: Detected architecture arm64.
Jan 23 17:27:10.547031 systemd[1]: Running in initrd.
Jan 23 17:27:10.547036 systemd[1]: No hostname configured, using default hostname.
Jan 23 17:27:10.547042 systemd[1]: Hostname set to .
Jan 23 17:27:10.547048 systemd[1]: Initializing machine ID from random generator.
Jan 23 17:27:10.547054 systemd[1]: Queued start job for default target initrd.target.
Jan 23 17:27:10.547060 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 17:27:10.547066 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 17:27:10.547072 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 17:27:10.547079 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 17:27:10.547085 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 17:27:10.547091 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 17:27:10.547098 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 17:27:10.547104 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 17:27:10.547110 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 17:27:10.547116 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 17:27:10.547121 systemd[1]: Reached target paths.target - Path Units.
Jan 23 17:27:10.547127 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 17:27:10.547133 systemd[1]: Reached target swap.target - Swaps.
Jan 23 17:27:10.547140 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 17:27:10.547146 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 17:27:10.547152 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 17:27:10.547157 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 23 17:27:10.547163 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 17:27:10.547169 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 17:27:10.547175 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 17:27:10.547186 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 17:27:10.547193 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 17:27:10.547199 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 17:27:10.547205 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 17:27:10.547211 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 17:27:10.547218 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 17:27:10.547224 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 17:27:10.547230 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 17:27:10.547236 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 17:27:10.547242 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 17:27:10.547248 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 17:27:10.547324 systemd-journald[566]: Collecting audit messages is enabled.
Jan 23 17:27:10.547342 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 17:27:10.547350 systemd-journald[566]: Journal started
Jan 23 17:27:10.547364 systemd-journald[566]: Runtime Journal (/run/log/journal/437ca1b26f6a480785a9f44d48bbc9b5) is 8M, max 78.3M, 70.3M free.
Jan 23 17:27:10.571124 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 17:27:10.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:10.573303 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 17:27:10.592023 kernel: audit: type=1130 audit(1769189230.570:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:10.588063 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 17:27:10.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:10.608035 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 17:27:10.632439 kernel: audit: type=1130 audit(1769189230.586:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:10.632463 kernel: audit: type=1130 audit(1769189230.606:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:10.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:10.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:10.634907 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 17:27:10.656828 kernel: audit: type=1130 audit(1769189230.631:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:10.656853 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 17:27:10.666158 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 17:27:10.710438 systemd-modules-load[569]: Inserted module 'br_netfilter'
Jan 23 17:27:10.714651 kernel: Bridge firewalling registered
Jan 23 17:27:10.714749 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 17:27:10.767855 kernel: audit: type=1130 audit(1769189230.718:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:10.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:10.728717 systemd-tmpfiles[578]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 17:27:10.730601 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 17:27:10.783371 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 17:27:10.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:10.813305 kernel: audit: type=1130 audit(1769189230.800:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:10.813203 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 17:27:10.822129 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 17:27:10.855525 kernel: audit: type=1130 audit(1769189230.817:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:10.855548 kernel: audit: type=1130 audit(1769189230.837:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:10.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:10.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:10.856951 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 17:27:10.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:10.879289 kernel: audit: type=1130 audit(1769189230.865:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:10.879476 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 17:27:10.889000 audit: BPF prog-id=6 op=LOAD
Jan 23 17:27:10.896942 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 17:27:10.906249 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 17:27:10.924404 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 17:27:10.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:10.941347 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 17:27:10.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:10.947773 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 17:27:11.014143 dracut-cmdline[604]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=35f959b0e84cd72dec35dcaa9fdae098b059a7436b8ff34bc604c87ac6375079
Jan 23 17:27:11.044666 systemd-resolved[589]: Positive Trust Anchors:
Jan 23 17:27:11.047309 systemd-resolved[589]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 17:27:11.047313 systemd-resolved[589]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 23 17:27:11.047337 systemd-resolved[589]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 17:27:11.077083 systemd-resolved[589]: Defaulting to hostname 'linux'.
Jan 23 17:27:11.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:11.077875 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 17:27:11.095551 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 17:27:11.227296 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 17:27:11.267296 kernel: iscsi: registered transport (tcp)
Jan 23 17:27:11.298151 kernel: iscsi: registered transport (qla4xxx)
Jan 23 17:27:11.298169 kernel: QLogic iSCSI HBA Driver
Jan 23 17:27:11.347941 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 17:27:11.366813 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 17:27:11.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:11.373537 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 17:27:11.424357 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 17:27:11.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:11.430469 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 17:27:11.454981 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 17:27:11.479064 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 17:27:11.509539 kernel: kauditd_printk_skb: 6 callbacks suppressed
Jan 23 17:27:11.509565 kernel: audit: type=1130 audit(1769189231.484:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:11.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:11.507393 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 17:27:11.524389 kernel: audit: type=1334 audit(1769189231.501:18): prog-id=7 op=LOAD
Jan 23 17:27:11.524423 kernel: audit: type=1334 audit(1769189231.501:19): prog-id=8 op=LOAD
Jan 23 17:27:11.501000 audit: BPF prog-id=7 op=LOAD
Jan 23 17:27:11.501000 audit: BPF prog-id=8 op=LOAD
Jan 23 17:27:11.584926 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 17:27:11.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:11.609290 kernel: audit: type=1130 audit(1769189231.590:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:11.627486 systemd-udevd[851]: Using default interface naming scheme 'v257'.
Jan 23 17:27:11.637247 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 17:27:11.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:11.661294 kernel: audit: type=1130 audit(1769189231.644:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:11.661625 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 17:27:11.671000 audit: BPF prog-id=9 op=LOAD
Jan 23 17:27:11.678286 kernel: audit: type=1334 audit(1769189231.671:22): prog-id=9 op=LOAD
Jan 23 17:27:11.681436 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 17:27:11.693850 dracut-pre-trigger[941]: rd.md=0: removing MD RAID activation
Jan 23 17:27:11.717322 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 17:27:11.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:11.743451 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 17:27:11.758575 kernel: audit: type=1130 audit(1769189231.727:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:11.754024 systemd-networkd[942]: lo: Link UP
Jan 23 17:27:11.783920 kernel: audit: type=1130 audit(1769189231.762:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:11.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:11.754027 systemd-networkd[942]: lo: Gained carrier
Jan 23 17:27:11.758485 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 17:27:11.763863 systemd[1]: Reached target network.target - Network.
Jan 23 17:27:11.817110 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 17:27:11.856922 kernel: audit: type=1130 audit(1769189231.829:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:11.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:11.846044 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 17:27:11.925468 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 17:27:11.925614 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 17:27:11.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:11.955373 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 17:27:11.980277 kernel: audit: type=1131 audit(1769189231.934:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:11.980316 kernel: hv_vmbus: registering driver hv_netvsc
Jan 23 17:27:11.980324 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#254 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 17:27:11.982605 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 17:27:12.037388 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 17:27:12.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:12.075493 kernel: hv_netvsc 00224877-1ce4-0022-4877-1ce400224877 eth0: VF slot 1 added
Jan 23 17:27:12.080252 systemd-networkd[942]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 23 17:27:12.080265 systemd-networkd[942]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 17:27:12.092670 systemd-networkd[942]: eth0: Link UP
Jan 23 17:27:12.092956 systemd-networkd[942]: eth0: Gained carrier
Jan 23 17:27:12.092971 systemd-networkd[942]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 23 17:27:12.121284 kernel: hv_vmbus: registering driver hv_pci
Jan 23 17:27:12.132065 kernel: hv_pci 6f03bc45-326b-4e72-bd18-db3572ede6a8: PCI VMBus probing: Using version 0x10004
Jan 23 17:27:12.145524 kernel: hv_pci 6f03bc45-326b-4e72-bd18-db3572ede6a8: PCI host bridge to bus 326b:00
Jan 23 17:27:12.145751 kernel: pci_bus 326b:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 23 17:27:12.145882 kernel: pci_bus 326b:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 23 17:27:12.159716 kernel: pci 326b:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Jan 23 17:27:12.159323 systemd-networkd[942]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 23 17:27:12.174337 kernel: pci 326b:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 23 17:27:12.180360 kernel: pci 326b:00:02.0: enabling Extended Tags
Jan 23 17:27:12.200367 kernel: pci 326b:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 326b:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Jan 23 17:27:12.212552 kernel: pci_bus 326b:00: busn_res: [bus 00-ff] end is updated to 00
Jan 23 17:27:12.212835 kernel: pci 326b:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Jan 23 17:27:12.399162 kernel: mlx5_core 326b:00:02.0: enabling device (0000 -> 0002)
Jan 23 17:27:12.408199 kernel: mlx5_core 326b:00:02.0: PTM is not supported by PCIe
Jan 23 17:27:12.408457 kernel: mlx5_core 326b:00:02.0: firmware version: 16.30.5026
Jan 23 17:27:12.491422 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 23 17:27:12.503867 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 17:27:12.551871 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 23 17:27:12.600567 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 23 17:27:12.621107 kernel: hv_netvsc 00224877-1ce4-0022-4877-1ce400224877 eth0: VF registering: eth1
Jan 23 17:27:12.622318 kernel: mlx5_core 326b:00:02.0 eth1: joined to eth0
Jan 23 17:27:12.630418 kernel: mlx5_core 326b:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 23 17:27:12.644298 kernel: mlx5_core 326b:00:02.0 enP12907s1: renamed from eth1
Jan 23 17:27:12.644648 systemd-networkd[942]: eth1: Interface name change detected, renamed to enP12907s1.
Jan 23 17:27:12.660457 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 23 17:27:12.773308 kernel: mlx5_core 326b:00:02.0 enP12907s1: Link up
Jan 23 17:27:12.816127 systemd-networkd[942]: enP12907s1: Link UP
Jan 23 17:27:12.821360 kernel: hv_netvsc 00224877-1ce4-0022-4877-1ce400224877 eth0: Data path switched to VF: enP12907s1
Jan 23 17:27:12.820106 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 17:27:12.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:12.831245 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 17:27:12.837247 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 17:27:12.855539 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 17:27:12.867062 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 17:27:12.907426 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 17:27:12.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:13.098706 systemd-networkd[942]: enP12907s1: Gained carrier
Jan 23 17:27:13.378596 systemd-networkd[942]: eth0: Gained IPv6LL
Jan 23 17:27:13.829297 disk-uuid[1053]: Warning: The kernel is still using the old partition table.
Jan 23 17:27:13.829297 disk-uuid[1053]: The new table will be used at the next reboot or after you
Jan 23 17:27:13.829297 disk-uuid[1053]: run partprobe(8) or kpartx(8)
Jan 23 17:27:13.829297 disk-uuid[1053]: The operation has completed successfully.
Jan 23 17:27:13.842668 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 17:27:13.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:13.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:13.846207 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 17:27:13.853135 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 17:27:13.918291 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1213)
Jan 23 17:27:13.928127 kernel: BTRFS info (device sda6): first mount of filesystem 604c215e-feca-417a-a119-9b36e3a162e8
Jan 23 17:27:13.928173 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 17:27:13.953582 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 17:27:13.953605 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 17:27:13.964288 kernel: BTRFS info (device sda6): last unmount of filesystem 604c215e-feca-417a-a119-9b36e3a162e8
Jan 23 17:27:13.965306 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 17:27:13.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:13.970806 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 17:27:15.151822 ignition[1232]: Ignition 2.24.0
Jan 23 17:27:15.151836 ignition[1232]: Stage: fetch-offline
Jan 23 17:27:15.154773 ignition[1232]: no configs at "/usr/lib/ignition/base.d"
Jan 23 17:27:15.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:15.157930 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 17:27:15.154788 ignition[1232]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 17:27:15.165130 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 17:27:15.154899 ignition[1232]: parsed url from cmdline: ""
Jan 23 17:27:15.154907 ignition[1232]: no config URL provided
Jan 23 17:27:15.154914 ignition[1232]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 17:27:15.154923 ignition[1232]: no config at "/usr/lib/ignition/user.ign"
Jan 23 17:27:15.154928 ignition[1232]: failed to fetch config: resource requires networking
Jan 23 17:27:15.156467 ignition[1232]: Ignition finished successfully
Jan 23 17:27:15.207945 ignition[1238]: Ignition 2.24.0
Jan 23 17:27:15.207951 ignition[1238]: Stage: fetch
Jan 23 17:27:15.208210 ignition[1238]: no configs at "/usr/lib/ignition/base.d"
Jan 23 17:27:15.208218 ignition[1238]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 17:27:15.208320 ignition[1238]: parsed url from cmdline: ""
Jan 23 17:27:15.208322 ignition[1238]: no config URL provided
Jan 23 17:27:15.208326 ignition[1238]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 17:27:15.208331 ignition[1238]: no config at "/usr/lib/ignition/user.ign"
Jan 23 17:27:15.208349 ignition[1238]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 23 17:27:15.308948 ignition[1238]: GET result: OK
Jan 23 17:27:15.309059 ignition[1238]: config has been read from IMDS userdata
Jan 23 17:27:15.309073 ignition[1238]: parsing config with SHA512: ad47c99cd4b2fca9b6ab5fc436652643a4c7136ac2230098b841110b4e1292471bb5438071935304dc126aac61a313dde4a913f45c52ab1a466b43916263740f
Jan 23 17:27:15.315982 unknown[1238]: fetched base config from "system"
Jan 23 17:27:15.315988 unknown[1238]: fetched base config from "system"
Jan 23 17:27:15.316402 ignition[1238]: fetch: fetch complete
Jan 23 17:27:15.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:15.315993 unknown[1238]: fetched user config from "azure"
Jan 23 17:27:15.316407 ignition[1238]: fetch: fetch passed
Jan 23 17:27:15.320256 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 17:27:15.316452 ignition[1238]: Ignition finished successfully
Jan 23 17:27:15.331983 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 17:27:15.366739 ignition[1244]: Ignition 2.24.0
Jan 23 17:27:15.366753 ignition[1244]: Stage: kargs
Jan 23 17:27:15.366962 ignition[1244]: no configs at "/usr/lib/ignition/base.d"
Jan 23 17:27:15.374554 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 17:27:15.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:15.366971 ignition[1244]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 17:27:15.382049 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 17:27:15.367619 ignition[1244]: kargs: kargs passed
Jan 23 17:27:15.367666 ignition[1244]: Ignition finished successfully
Jan 23 17:27:15.416965 ignition[1250]: Ignition 2.24.0
Jan 23 17:27:15.416982 ignition[1250]: Stage: disks
Jan 23 17:27:15.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:15.419720 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 17:27:15.418082 ignition[1250]: no configs at "/usr/lib/ignition/base.d"
Jan 23 17:27:15.424761 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 17:27:15.418091 ignition[1250]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 17:27:15.432377 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 17:27:15.418627 ignition[1250]: disks: disks passed
Jan 23 17:27:15.441388 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 17:27:15.418678 ignition[1250]: Ignition finished successfully
Jan 23 17:27:15.449148 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 17:27:15.458326 systemd[1]: Reached target basic.target - Basic System.
Jan 23 17:27:15.468311 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 17:27:15.593637 systemd-fsck[1258]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks
Jan 23 17:27:15.603139 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 17:27:15.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:15.610803 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 17:27:16.206319 kernel: EXT4-fs (sda9): mounted filesystem 6e8555bb-6998-46ec-8ba6-5a7a415f09ac r/w with ordered data mode. Quota mode: none.
Jan 23 17:27:16.206903 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 17:27:16.211084 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 17:27:16.268298 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 17:27:16.287140 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 17:27:16.301194 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 23 17:27:16.318525 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 17:27:16.330357 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1272)
Jan 23 17:27:16.330378 kernel: BTRFS info (device sda6): first mount of filesystem 604c215e-feca-417a-a119-9b36e3a162e8
Jan 23 17:27:16.318577 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 17:27:16.364229 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 17:27:16.364265 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 17:27:16.364278 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 17:27:16.347479 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 17:27:16.359369 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 17:27:16.371495 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 17:27:16.890916 coreos-metadata[1274]: Jan 23 17:27:16.890 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 23 17:27:16.898715 coreos-metadata[1274]: Jan 23 17:27:16.898 INFO Fetch successful
Jan 23 17:27:16.902760 coreos-metadata[1274]: Jan 23 17:27:16.902 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 23 17:27:16.910871 coreos-metadata[1274]: Jan 23 17:27:16.910 INFO Fetch successful
Jan 23 17:27:16.927977 coreos-metadata[1274]: Jan 23 17:27:16.927 INFO wrote hostname ci-4547.1.0-a-f00ee6181d to /sysroot/etc/hostname
Jan 23 17:27:16.935130 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 23 17:27:16.948435 kernel: kauditd_printk_skb: 11 callbacks suppressed
Jan 23 17:27:16.948460 kernel: audit: type=1130 audit(1769189236.939:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:16.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:18.299570 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 17:27:18.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:18.309965 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 17:27:18.333978 kernel: audit: type=1130 audit(1769189238.307:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:18.326677 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 17:27:18.359897 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 17:27:18.370349 kernel: BTRFS info (device sda6): last unmount of filesystem 604c215e-feca-417a-a119-9b36e3a162e8
Jan 23 17:27:18.379436 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 17:27:18.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:18.400439 ignition[1377]: INFO : Ignition 2.24.0
Jan 23 17:27:18.400439 ignition[1377]: INFO : Stage: mount
Jan 23 17:27:18.400439 ignition[1377]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 17:27:18.400439 ignition[1377]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 17:27:18.400439 ignition[1377]: INFO : mount: mount passed
Jan 23 17:27:18.400439 ignition[1377]: INFO : Ignition finished successfully
Jan 23 17:27:18.444038 kernel: audit: type=1130 audit(1769189238.386:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:18.444064 kernel: audit: type=1130 audit(1769189238.407:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:18.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:18.401672 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 17:27:18.424439 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 17:27:18.451389 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 17:27:18.475316 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1386) Jan 23 17:27:18.475367 kernel: BTRFS info (device sda6): first mount of filesystem 604c215e-feca-417a-a119-9b36e3a162e8 Jan 23 17:27:18.485200 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:27:18.494309 kernel: BTRFS info (device sda6): turning on async discard Jan 23 17:27:18.494334 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 17:27:18.495959 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 17:27:18.526897 ignition[1403]: INFO : Ignition 2.24.0 Jan 23 17:27:18.526897 ignition[1403]: INFO : Stage: files Jan 23 17:27:18.533421 ignition[1403]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:27:18.533421 ignition[1403]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 17:27:18.533421 ignition[1403]: DEBUG : files: compiled without relabeling support, skipping Jan 23 17:27:18.565301 ignition[1403]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 17:27:18.565301 ignition[1403]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 17:27:18.644890 ignition[1403]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 17:27:18.650438 ignition[1403]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 17:27:18.650438 ignition[1403]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 17:27:18.645324 unknown[1403]: wrote ssh authorized keys file for user: core Jan 23 17:27:18.671611 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 23 17:27:18.680566 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 
23 17:27:18.711552 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 17:27:18.903614 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 23 17:27:18.903614 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 17:27:18.919239 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 17:27:18.919239 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 17:27:18.919239 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 17:27:18.919239 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 17:27:18.919239 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 17:27:18.919239 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 17:27:18.919239 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 17:27:18.970656 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 17:27:18.970656 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 17:27:18.970656 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 17:27:18.970656 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 17:27:18.970656 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 17:27:18.970656 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 23 17:27:19.451952 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 17:27:19.666468 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 17:27:19.666468 ignition[1403]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 17:27:19.715416 ignition[1403]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 17:27:19.723390 ignition[1403]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 17:27:19.723390 ignition[1403]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 17:27:19.723390 ignition[1403]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 23 17:27:19.761378 kernel: audit: type=1130 audit(1769189239.735:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:27:19.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:19.761451 ignition[1403]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 17:27:19.761451 ignition[1403]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 17:27:19.761451 ignition[1403]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 17:27:19.761451 ignition[1403]: INFO : files: files passed Jan 23 17:27:19.761451 ignition[1403]: INFO : Ignition finished successfully Jan 23 17:27:19.732213 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 17:27:19.755893 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 17:27:19.791646 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 17:27:19.803614 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 17:27:19.818753 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 17:27:19.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:19.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:19.857755 kernel: audit: type=1130 audit(1769189239.831:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:27:19.857824 kernel: audit: type=1131 audit(1769189239.831:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:19.864031 initrd-setup-root-after-ignition[1434]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:27:19.864031 initrd-setup-root-after-ignition[1434]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:27:19.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:19.896826 initrd-setup-root-after-ignition[1438]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:27:19.913917 kernel: audit: type=1130 audit(1769189239.876:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:19.871631 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 17:27:19.877709 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 17:27:19.903294 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 17:27:19.964194 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 17:27:19.964332 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 17:27:19.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:19.993520 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Jan 23 17:27:19.997986 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 17:27:20.023913 kernel: audit: type=1130 audit(1769189239.973:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.023940 kernel: audit: type=1131 audit(1769189239.973:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:19.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.019721 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 17:27:20.024496 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 17:27:20.055344 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:27:20.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.067662 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 17:27:20.096406 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 17:27:20.096531 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:27:20.101839 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:27:20.112025 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 17:27:20.120240 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jan 23 17:27:20.120377 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:27:20.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.142109 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 17:27:20.146855 systemd[1]: Stopped target basic.target - Basic System. Jan 23 17:27:20.154495 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 17:27:20.162669 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 17:27:20.171699 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 17:27:20.181776 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 17:27:20.191336 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 17:27:20.200498 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 17:27:20.211094 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 17:27:20.220470 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 17:27:20.228940 systemd[1]: Stopped target swap.target - Swaps. Jan 23 17:27:20.236392 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 17:27:20.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.236533 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 17:27:20.248661 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:27:20.254149 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 23 17:27:20.263092 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 17:27:20.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.267311 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:27:20.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.272942 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 17:27:20.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.273061 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 17:27:20.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.286958 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 17:27:20.287064 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 17:27:20.292501 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 17:27:20.292579 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 17:27:20.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.302723 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Jan 23 17:27:20.302807 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 17:27:20.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.378363 ignition[1458]: INFO : Ignition 2.24.0 Jan 23 17:27:20.378363 ignition[1458]: INFO : Stage: umount Jan 23 17:27:20.378363 ignition[1458]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:27:20.378363 ignition[1458]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 17:27:20.378363 ignition[1458]: INFO : umount: umount passed Jan 23 17:27:20.378363 ignition[1458]: INFO : Ignition finished successfully Jan 23 17:27:20.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.316496 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 17:27:20.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:27:20.332997 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 17:27:20.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.333186 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:27:20.343161 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 17:27:20.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.355913 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 17:27:20.356097 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:27:20.373259 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 17:27:20.373385 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:27:20.382626 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 17:27:20.382736 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 17:27:20.394942 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 17:27:20.395059 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 17:27:20.403980 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 17:27:20.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.404078 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Jan 23 17:27:20.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.411944 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 17:27:20.412005 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 17:27:20.419027 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 17:27:20.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.419064 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 17:27:20.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.427116 systemd[1]: Stopped target network.target - Network. Jan 23 17:27:20.435382 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 17:27:20.435473 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 17:27:20.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.445153 systemd[1]: Stopped target paths.target - Path Units. Jan 23 17:27:20.591000 audit: BPF prog-id=9 op=UNLOAD Jan 23 17:27:20.591000 audit: BPF prog-id=6 op=UNLOAD Jan 23 17:27:20.453307 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 23 17:27:20.457323 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:27:20.462636 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 17:27:20.466632 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 17:27:20.482468 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 17:27:20.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.482539 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 17:27:20.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.495711 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 17:27:20.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.495755 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 17:27:20.504146 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 23 17:27:20.504166 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 23 17:27:20.512677 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 17:27:20.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.512741 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 17:27:20.520467 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Jan 23 17:27:20.520502 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 17:27:20.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.528541 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 17:27:20.537177 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 17:27:20.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.546388 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 17:27:20.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.547020 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 17:27:20.760504 kernel: hv_netvsc 00224877-1ce4-0022-4877-1ce400224877 eth0: Data path switched from VF: enP12907s1 Jan 23 17:27:20.547136 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 17:27:20.557419 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 17:27:20.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.557506 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 17:27:20.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:27:20.572435 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 17:27:20.572557 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 17:27:20.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.593746 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 17:27:20.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.603480 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 17:27:20.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.603531 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:27:20.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.615865 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 17:27:20.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:20.625752 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Jan 23 17:27:20.625841 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 17:27:20.634850 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 17:27:20.634914 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:27:20.642920 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 17:27:20.642970 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 17:27:20.657799 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:27:20.680568 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 17:27:20.685397 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:27:20.691395 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 17:27:20.691487 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 17:27:20.699698 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 17:27:20.699745 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:27:20.708172 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 17:27:20.708248 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 17:27:20.721488 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 17:27:20.721550 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 17:27:20.735896 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 17:27:20.735952 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 17:27:20.751861 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 17:27:20.763356 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Jan 23 17:27:20.763448 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:27:20.770995 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 17:27:20.982035 systemd-journald[566]: Received SIGTERM from PID 1 (systemd). Jan 23 17:27:20.771058 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:27:20.782124 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:27:20.782207 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:27:20.798650 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 17:27:20.798749 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 17:27:20.809710 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 17:27:20.809804 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 17:27:20.819147 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 17:27:20.819281 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 17:27:20.828170 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 17:27:20.828301 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 17:27:20.837354 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 17:27:20.862507 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 17:27:20.889050 systemd[1]: Switching root. 
Jan 23 17:27:21.044418 systemd-journald[566]: Journal stopped Jan 23 17:27:26.132257 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 17:27:26.134348 kernel: SELinux: policy capability open_perms=1 Jan 23 17:27:26.134371 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 17:27:26.134379 kernel: SELinux: policy capability always_check_network=0 Jan 23 17:27:26.134390 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 17:27:26.134396 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 17:27:26.134403 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 17:27:26.134409 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 17:27:26.134414 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 17:27:26.134422 systemd[1]: Successfully loaded SELinux policy in 162.368ms. Jan 23 17:27:26.134430 kernel: kauditd_printk_skb: 40 callbacks suppressed Jan 23 17:27:26.134437 kernel: audit: type=1403 audit(1769189242.034:88): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 17:27:26.134443 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.756ms. Jan 23 17:27:26.134451 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 17:27:26.134462 systemd[1]: Detected virtualization microsoft. Jan 23 17:27:26.134469 systemd[1]: Detected architecture arm64. Jan 23 17:27:26.134476 systemd[1]: Detected first boot. Jan 23 17:27:26.134483 systemd[1]: Hostname set to . Jan 23 17:27:26.134489 systemd[1]: Initializing machine ID from random generator. 
Jan 23 17:27:26.134497 kernel: audit: type=1334 audit(1769189242.764:89): prog-id=10 op=LOAD Jan 23 17:27:26.134503 kernel: audit: type=1334 audit(1769189242.764:90): prog-id=10 op=UNLOAD Jan 23 17:27:26.134509 kernel: audit: type=1334 audit(1769189242.768:91): prog-id=11 op=LOAD Jan 23 17:27:26.134515 kernel: audit: type=1334 audit(1769189242.768:92): prog-id=11 op=UNLOAD Jan 23 17:27:26.134522 zram_generator::config[1501]: No configuration found. Jan 23 17:27:26.134530 kernel: NET: Registered PF_VSOCK protocol family Jan 23 17:27:26.134536 systemd[1]: Populated /etc with preset unit settings. Jan 23 17:27:26.134542 kernel: audit: type=1334 audit(1769189245.258:93): prog-id=12 op=LOAD Jan 23 17:27:26.134548 kernel: audit: type=1334 audit(1769189245.258:94): prog-id=3 op=UNLOAD Jan 23 17:27:26.134554 kernel: audit: type=1334 audit(1769189245.258:95): prog-id=13 op=LOAD Jan 23 17:27:26.134560 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 17:27:26.134567 kernel: audit: type=1334 audit(1769189245.258:96): prog-id=14 op=LOAD Jan 23 17:27:26.134573 kernel: audit: type=1334 audit(1769189245.258:97): prog-id=4 op=UNLOAD Jan 23 17:27:26.134580 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 17:27:26.134586 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 17:27:26.134594 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 17:27:26.134601 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 17:27:26.134614 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 17:27:26.134622 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 17:27:26.134629 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 17:27:26.134636 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Jan 23 17:27:26.134645 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 17:27:26.134652 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 17:27:26.134658 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:27:26.134665 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:27:26.134673 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 17:27:26.134680 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 17:27:26.134687 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 17:27:26.134693 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 17:27:26.134700 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 23 17:27:26.134707 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:27:26.134713 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:27:26.134721 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 17:27:26.134728 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 17:27:26.134735 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 17:27:26.134741 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 17:27:26.134748 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:27:26.134755 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 17:27:26.134763 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. 
Jan 23 17:27:26.134770 systemd[1]: Reached target slices.target - Slice Units. Jan 23 17:27:26.134777 systemd[1]: Reached target swap.target - Swaps. Jan 23 17:27:26.134783 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 17:27:26.134790 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 17:27:26.134798 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 17:27:26.134804 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 23 17:27:26.134811 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 23 17:27:26.134818 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:27:26.134825 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 23 17:27:26.134833 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 23 17:27:26.134840 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 17:27:26.134846 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:27:26.134853 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 17:27:26.134859 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 17:27:26.134866 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 17:27:26.134873 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 17:27:26.134880 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 17:27:26.134887 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 17:27:26.134894 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 23 17:27:26.134902 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 17:27:26.134909 systemd[1]: Reached target machines.target - Containers. Jan 23 17:27:26.134916 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 17:27:26.134924 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:27:26.134931 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 17:27:26.134938 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 17:27:26.134944 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 17:27:26.134951 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 17:27:26.134958 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 17:27:26.134965 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 17:27:26.134973 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 17:27:26.134979 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 17:27:26.134986 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 17:27:26.134993 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 17:27:26.135000 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 17:27:26.135006 systemd[1]: Stopped systemd-fsck-usr.service. 
Jan 23 17:27:26.135013 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:27:26.135022 kernel: fuse: init (API version 7.41) Jan 23 17:27:26.135028 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 17:27:26.135035 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 17:27:26.135041 kernel: ACPI: bus type drm_connector registered Jan 23 17:27:26.135048 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 17:27:26.135055 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 17:27:26.135096 systemd-journald[1595]: Collecting audit messages is enabled. Jan 23 17:27:26.135111 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 17:27:26.135119 systemd-journald[1595]: Journal started Jan 23 17:27:26.135136 systemd-journald[1595]: Runtime Journal (/run/log/journal/102a6b23023d40739f60c3121faca739) is 8M, max 78.3M, 70.3M free. Jan 23 17:27:25.609000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 23 17:27:26.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:27:26.027000 audit: BPF prog-id=14 op=UNLOAD Jan 23 17:27:26.027000 audit: BPF prog-id=13 op=UNLOAD Jan 23 17:27:26.027000 audit: BPF prog-id=15 op=LOAD Jan 23 17:27:26.027000 audit: BPF prog-id=16 op=LOAD Jan 23 17:27:26.027000 audit: BPF prog-id=17 op=LOAD Jan 23 17:27:26.127000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 23 17:27:26.127000 audit[1595]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc31a3ea0 a2=4000 a3=0 items=0 ppid=1 pid=1595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:27:26.127000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 23 17:27:25.242173 systemd[1]: Queued start job for default target multi-user.target. Jan 23 17:27:25.259916 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 23 17:27:25.265218 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 17:27:25.265592 systemd[1]: systemd-journald.service: Consumed 2.498s CPU time. Jan 23 17:27:26.157850 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 17:27:26.168433 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 17:27:26.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.169591 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 17:27:26.174108 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 17:27:26.178962 systemd[1]: Mounted media.mount - External Media Directory. 
Jan 23 17:27:26.184499 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 17:27:26.189543 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 17:27:26.194390 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 17:27:26.199025 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:27:26.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.205860 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 17:27:26.206108 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 17:27:26.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.211601 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 17:27:26.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.217249 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 17:27:26.217666 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 23 17:27:26.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.222961 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 17:27:26.223182 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 17:27:26.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.228231 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 17:27:26.228557 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 17:27:26.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.233967 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 17:27:26.234187 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jan 23 17:27:26.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.239970 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 17:27:26.240125 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 17:27:26.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.245382 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 17:27:26.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.250774 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:27:26.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.256976 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 23 17:27:26.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.262976 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 17:27:26.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.276906 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 17:27:26.282682 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 23 17:27:26.288783 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 17:27:26.299408 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 17:27:26.304168 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 17:27:26.304384 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 17:27:26.309991 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 17:27:26.315754 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:27:26.315953 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 17:27:26.317631 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 17:27:26.334164 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 23 17:27:26.339712 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 17:27:26.340966 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 17:27:26.345509 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 17:27:26.348458 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 17:27:26.355233 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 17:27:26.363407 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 17:27:26.377510 systemd-journald[1595]: Time spent on flushing to /var/log/journal/102a6b23023d40739f60c3121faca739 is 13.011ms for 1069 entries. Jan 23 17:27:26.377510 systemd-journald[1595]: System Journal (/var/log/journal/102a6b23023d40739f60c3121faca739) is 8M, max 2.2G, 2.2G free. Jan 23 17:27:26.441531 systemd-journald[1595]: Received client request to flush runtime journal. Jan 23 17:27:26.441597 kernel: loop1: detected capacity change from 0 to 207008 Jan 23 17:27:26.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.372448 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:27:26.383426 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Jan 23 17:27:26.391228 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 17:27:26.399380 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 17:27:26.408183 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 17:27:26.417489 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 17:27:26.443620 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 17:27:26.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.467553 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:27:26.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.494016 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 17:27:26.496674 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 17:27:26.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.512344 kernel: loop2: detected capacity change from 0 to 45344 Jan 23 17:27:26.574797 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 17:27:26.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:27:26.580000 audit: BPF prog-id=18 op=LOAD Jan 23 17:27:26.580000 audit: BPF prog-id=19 op=LOAD Jan 23 17:27:26.580000 audit: BPF prog-id=20 op=LOAD Jan 23 17:27:26.582502 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 23 17:27:26.587000 audit: BPF prog-id=21 op=LOAD Jan 23 17:27:26.591443 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 17:27:26.599456 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 17:27:26.610000 audit: BPF prog-id=22 op=LOAD Jan 23 17:27:26.612000 audit: BPF prog-id=23 op=LOAD Jan 23 17:27:26.612000 audit: BPF prog-id=24 op=LOAD Jan 23 17:27:26.614520 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 23 17:27:26.620000 audit: BPF prog-id=25 op=LOAD Jan 23 17:27:26.620000 audit: BPF prog-id=26 op=LOAD Jan 23 17:27:26.620000 audit: BPF prog-id=27 op=LOAD Jan 23 17:27:26.624470 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 17:27:26.679155 systemd-nsresourced[1661]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 23 17:27:26.680534 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 23 17:27:26.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.688992 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 17:27:26.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.713057 systemd-tmpfiles[1660]: ACLs are not supported, ignoring. 
Jan 23 17:27:26.713073 systemd-tmpfiles[1660]: ACLs are not supported, ignoring. Jan 23 17:27:26.718562 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:27:26.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.767187 systemd-oomd[1658]: No swap; memory pressure usage will be degraded Jan 23 17:27:26.768720 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 23 17:27:26.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.796709 systemd-resolved[1659]: Positive Trust Anchors: Jan 23 17:27:26.796725 systemd-resolved[1659]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 17:27:26.796727 systemd-resolved[1659]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 17:27:26.796746 systemd-resolved[1659]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 17:27:26.890703 systemd-resolved[1659]: Using system hostname 'ci-4547.1.0-a-f00ee6181d'. Jan 23 17:27:26.892376 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 23 17:27:26.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:26.900737 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:27:26.922299 kernel: loop3: detected capacity change from 0 to 27544 Jan 23 17:27:27.066247 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 17:27:27.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:27.074649 kernel: kauditd_printk_skb: 56 callbacks suppressed Jan 23 17:27:27.074695 kernel: audit: type=1130 audit(1769189247.070:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:27.075000 audit: BPF prog-id=8 op=UNLOAD Jan 23 17:27:27.092083 kernel: audit: type=1334 audit(1769189247.075:153): prog-id=8 op=UNLOAD Jan 23 17:27:27.075000 audit: BPF prog-id=7 op=UNLOAD Jan 23 17:27:27.095492 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:27:27.095987 kernel: audit: type=1334 audit(1769189247.075:154): prog-id=7 op=UNLOAD Jan 23 17:27:27.104693 kernel: audit: type=1334 audit(1769189247.087:155): prog-id=28 op=LOAD Jan 23 17:27:27.087000 audit: BPF prog-id=28 op=LOAD Jan 23 17:27:27.091000 audit: BPF prog-id=29 op=LOAD Jan 23 17:27:27.108746 kernel: audit: type=1334 audit(1769189247.091:156): prog-id=29 op=LOAD Jan 23 17:27:27.133789 systemd-udevd[1682]: Using default interface naming scheme 'v257'. 
Jan 23 17:27:27.376069 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:27:27.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:27.397439 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 17:27:27.407168 kernel: audit: type=1130 audit(1769189247.382:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:27.407291 kernel: audit: type=1334 audit(1769189247.394:158): prog-id=30 op=LOAD Jan 23 17:27:27.394000 audit: BPF prog-id=30 op=LOAD Jan 23 17:27:27.435314 kernel: loop4: detected capacity change from 0 to 100192 Jan 23 17:27:27.469417 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 23 17:27:27.520304 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 17:27:27.528702 systemd-networkd[1692]: lo: Link UP Jan 23 17:27:27.528713 systemd-networkd[1692]: lo: Gained carrier Jan 23 17:27:27.531138 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 17:27:27.537791 systemd-networkd[1692]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 17:27:27.537972 systemd-networkd[1692]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 17:27:27.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:27:27.539648 systemd[1]: Reached target network.target - Network. 
Jan 23 17:27:27.555475 kernel: audit: type=1130 audit(1769189247.537:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:27.561035 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 17:27:27.572305 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#246 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 17:27:27.578943 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 17:27:27.598328 kernel: hv_vmbus: registering driver hv_balloon
Jan 23 17:27:27.598435 kernel: hv_vmbus: registering driver hyperv_fb
Jan 23 17:27:27.612076 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 23 17:27:27.612203 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 23 17:27:27.612225 kernel: hv_balloon: Memory hot add disabled on ARM64
Jan 23 17:27:27.616305 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 23 17:27:27.627583 kernel: Console: switching to colour dummy device 80x25
Jan 23 17:27:27.633492 kernel: Console: switching to colour frame buffer device 128x48
Jan 23 17:27:27.639358 kernel: mlx5_core 326b:00:02.0 enP12907s1: Link up
Jan 23 17:27:27.665320 kernel: hv_netvsc 00224877-1ce4-0022-4877-1ce400224877 eth0: Data path switched to VF: enP12907s1
Jan 23 17:27:27.666891 systemd-networkd[1692]: enP12907s1: Link UP
Jan 23 17:27:27.669438 systemd-networkd[1692]: eth0: Link UP
Jan 23 17:27:27.669445 systemd-networkd[1692]: eth0: Gained carrier
Jan 23 17:27:27.669467 systemd-networkd[1692]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 23 17:27:27.670136 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 23 17:27:27.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:27.699292 kernel: audit: type=1130 audit(1769189247.678:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:27.699411 systemd-networkd[1692]: enP12907s1: Gained carrier
Jan 23 17:27:27.719508 systemd-networkd[1692]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 23 17:27:27.723412 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 17:27:27.748458 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 17:27:27.748659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 17:27:27.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:27.766534 kernel: audit: type=1130 audit(1769189247.752:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:27.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:27.771295 kernel: MACsec IEEE 802.1AE
Jan 23 17:27:27.771451 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 17:27:27.859741 kernel: loop5: detected capacity change from 0 to 207008
Jan 23 17:27:27.883971 kernel: loop6: detected capacity change from 0 to 45344
Jan 23 17:27:27.886703 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 23 17:27:27.893117 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 17:27:27.907521 kernel: loop7: detected capacity change from 0 to 27544
Jan 23 17:27:27.926411 kernel: loop1: detected capacity change from 0 to 100192
Jan 23 17:27:27.938084 (sd-merge)[1806]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'.
Jan 23 17:27:27.941744 (sd-merge)[1806]: Merged extensions into '/usr'.
Jan 23 17:27:27.945500 systemd[1]: Reload requested from client PID 1640 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 17:27:27.945521 systemd[1]: Reloading...
Jan 23 17:27:28.006526 zram_generator::config[1845]: No configuration found.
Jan 23 17:27:28.187179 systemd[1]: Reloading finished in 241 ms.
Jan 23 17:27:28.216774 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 17:27:28.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.221997 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 17:27:28.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.228042 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 17:27:28.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.243487 systemd[1]: Starting ensure-sysext.service...
Jan 23 17:27:28.247664 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 17:27:28.255000 audit: BPF prog-id=31 op=LOAD
Jan 23 17:27:28.262000 audit: BPF prog-id=32 op=LOAD
Jan 23 17:27:28.262000 audit: BPF prog-id=28 op=UNLOAD
Jan 23 17:27:28.262000 audit: BPF prog-id=29 op=UNLOAD
Jan 23 17:27:28.262000 audit: BPF prog-id=33 op=LOAD
Jan 23 17:27:28.262000 audit: BPF prog-id=30 op=UNLOAD
Jan 23 17:27:28.263000 audit: BPF prog-id=34 op=LOAD
Jan 23 17:27:28.263000 audit: BPF prog-id=15 op=UNLOAD
Jan 23 17:27:28.263000 audit: BPF prog-id=35 op=LOAD
Jan 23 17:27:28.263000 audit: BPF prog-id=36 op=LOAD
Jan 23 17:27:28.263000 audit: BPF prog-id=16 op=UNLOAD
Jan 23 17:27:28.263000 audit: BPF prog-id=17 op=UNLOAD
Jan 23 17:27:28.264000 audit: BPF prog-id=37 op=LOAD
Jan 23 17:27:28.264000 audit: BPF prog-id=22 op=UNLOAD
Jan 23 17:27:28.264000 audit: BPF prog-id=38 op=LOAD
Jan 23 17:27:28.264000 audit: BPF prog-id=39 op=LOAD
Jan 23 17:27:28.264000 audit: BPF prog-id=23 op=UNLOAD
Jan 23 17:27:28.264000 audit: BPF prog-id=24 op=UNLOAD
Jan 23 17:27:28.264000 audit: BPF prog-id=40 op=LOAD
Jan 23 17:27:28.264000 audit: BPF prog-id=18 op=UNLOAD
Jan 23 17:27:28.264000 audit: BPF prog-id=41 op=LOAD
Jan 23 17:27:28.264000 audit: BPF prog-id=42 op=LOAD
Jan 23 17:27:28.264000 audit: BPF prog-id=19 op=UNLOAD
Jan 23 17:27:28.264000 audit: BPF prog-id=20 op=UNLOAD
Jan 23 17:27:28.265000 audit: BPF prog-id=43 op=LOAD
Jan 23 17:27:28.265000 audit: BPF prog-id=21 op=UNLOAD
Jan 23 17:27:28.265000 audit: BPF prog-id=44 op=LOAD
Jan 23 17:27:28.265000 audit: BPF prog-id=25 op=UNLOAD
Jan 23 17:27:28.265000 audit: BPF prog-id=45 op=LOAD
Jan 23 17:27:28.265000 audit: BPF prog-id=46 op=LOAD
Jan 23 17:27:28.265000 audit: BPF prog-id=26 op=UNLOAD
Jan 23 17:27:28.265000 audit: BPF prog-id=27 op=UNLOAD
Jan 23 17:27:28.271845 systemd[1]: Reload requested from client PID 1906 ('systemctl') (unit ensure-sysext.service)...
Jan 23 17:27:28.271859 systemd[1]: Reloading...
Jan 23 17:27:28.281811 systemd-tmpfiles[1907]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 17:27:28.281834 systemd-tmpfiles[1907]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 17:27:28.282006 systemd-tmpfiles[1907]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 17:27:28.283438 systemd-tmpfiles[1907]: ACLs are not supported, ignoring.
Jan 23 17:27:28.283581 systemd-tmpfiles[1907]: ACLs are not supported, ignoring.
Jan 23 17:27:28.307674 systemd-tmpfiles[1907]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 17:27:28.307688 systemd-tmpfiles[1907]: Skipping /boot
Jan 23 17:27:28.319172 systemd-tmpfiles[1907]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 17:27:28.319183 systemd-tmpfiles[1907]: Skipping /boot
Jan 23 17:27:28.348313 zram_generator::config[1941]: No configuration found.
Jan 23 17:27:28.515056 systemd[1]: Reloading finished in 242 ms.
Jan 23 17:27:28.529000 audit: BPF prog-id=47 op=LOAD
Jan 23 17:27:28.529000 audit: BPF prog-id=40 op=UNLOAD
Jan 23 17:27:28.529000 audit: BPF prog-id=48 op=LOAD
Jan 23 17:27:28.529000 audit: BPF prog-id=49 op=LOAD
Jan 23 17:27:28.529000 audit: BPF prog-id=41 op=UNLOAD
Jan 23 17:27:28.529000 audit: BPF prog-id=42 op=UNLOAD
Jan 23 17:27:28.530000 audit: BPF prog-id=50 op=LOAD
Jan 23 17:27:28.530000 audit: BPF prog-id=33 op=UNLOAD
Jan 23 17:27:28.530000 audit: BPF prog-id=51 op=LOAD
Jan 23 17:27:28.530000 audit: BPF prog-id=34 op=UNLOAD
Jan 23 17:27:28.531000 audit: BPF prog-id=52 op=LOAD
Jan 23 17:27:28.531000 audit: BPF prog-id=53 op=LOAD
Jan 23 17:27:28.531000 audit: BPF prog-id=35 op=UNLOAD
Jan 23 17:27:28.531000 audit: BPF prog-id=36 op=UNLOAD
Jan 23 17:27:28.531000 audit: BPF prog-id=54 op=LOAD
Jan 23 17:27:28.531000 audit: BPF prog-id=43 op=UNLOAD
Jan 23 17:27:28.536000 audit: BPF prog-id=55 op=LOAD
Jan 23 17:27:28.536000 audit: BPF prog-id=44 op=UNLOAD
Jan 23 17:27:28.536000 audit: BPF prog-id=56 op=LOAD
Jan 23 17:27:28.537000 audit: BPF prog-id=57 op=LOAD
Jan 23 17:27:28.537000 audit: BPF prog-id=45 op=UNLOAD
Jan 23 17:27:28.537000 audit: BPF prog-id=46 op=UNLOAD
Jan 23 17:27:28.537000 audit: BPF prog-id=58 op=LOAD
Jan 23 17:27:28.537000 audit: BPF prog-id=59 op=LOAD
Jan 23 17:27:28.537000 audit: BPF prog-id=31 op=UNLOAD
Jan 23 17:27:28.537000 audit: BPF prog-id=32 op=UNLOAD
Jan 23 17:27:28.537000 audit: BPF prog-id=60 op=LOAD
Jan 23 17:27:28.538000 audit: BPF prog-id=37 op=UNLOAD
Jan 23 17:27:28.538000 audit: BPF prog-id=61 op=LOAD
Jan 23 17:27:28.538000 audit: BPF prog-id=62 op=LOAD
Jan 23 17:27:28.538000 audit: BPF prog-id=38 op=UNLOAD
Jan 23 17:27:28.538000 audit: BPF prog-id=39 op=UNLOAD
Jan 23 17:27:28.541265 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 17:27:28.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.554171 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 17:27:28.563222 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 17:27:28.571577 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 17:27:28.577154 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 17:27:28.590968 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 17:27:28.602075 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 17:27:28.605610 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 17:27:28.614548 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 17:27:28.622789 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 17:27:28.627587 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 17:27:28.627785 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 23 17:27:28.627859 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 17:27:28.628884 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 17:27:28.629207 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 17:27:28.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.636199 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 17:27:28.636525 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 17:27:28.636000 audit[2002]: SYSTEM_BOOT pid=2002 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.642778 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 17:27:28.642960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 17:27:28.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.652839 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 17:27:28.653003 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 17:27:28.655470 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 17:27:28.657532 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 17:27:28.665538 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 17:27:28.672715 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 17:27:28.677859 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 17:27:28.678046 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 23 17:27:28.678344 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 17:27:28.679463 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 17:27:28.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.685641 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 17:27:28.685855 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 17:27:28.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.691055 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 17:27:28.691236 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 17:27:28.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.697232 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 17:27:28.697604 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 17:27:28.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.708897 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 17:27:28.710194 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 17:27:28.719501 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 17:27:28.726528 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 17:27:28.735280 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 17:27:28.740683 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 17:27:28.740863 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 23 17:27:28.740936 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 17:27:28.741040 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 17:27:28.746476 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 17:27:28.746713 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 17:27:28.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.751867 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 17:27:28.752037 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 17:27:28.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.756840 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 17:27:28.757019 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 17:27:28.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.763019 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 17:27:28.763198 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 17:27:28.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.771150 systemd[1]: Finished ensure-sysext.service.
Jan 23 17:27:28.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.777263 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 17:27:28.777343 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 17:27:28.858644 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 17:27:28.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:27:28.945000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jan 23 17:27:28.945000 audit[2045]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd7676840 a2=420 a3=0 items=0 ppid=1998 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 17:27:28.945000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 23 17:27:28.947056 augenrules[2045]: No rules
Jan 23 17:27:28.948775 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 17:27:28.949184 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 17:27:29.186452 systemd-networkd[1692]: eth0: Gained IPv6LL
Jan 23 17:27:29.189670 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 23 17:27:29.195377 systemd[1]: Reached target network-online.target - Network is Online.
Jan 23 17:27:29.315376 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 17:27:29.321213 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 17:27:34.332807 ldconfig[2000]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 17:27:34.349469 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 17:27:34.356764 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 17:27:34.374657 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 17:27:34.379952 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 17:27:34.384725 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 17:27:34.389880 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 17:27:34.395414 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 17:27:34.399838 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 17:27:34.404945 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Jan 23 17:27:34.410201 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Jan 23 17:27:34.414737 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 17:27:34.419952 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 17:27:34.419988 systemd[1]: Reached target paths.target - Path Units.
Jan 23 17:27:34.424079 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 17:27:34.429413 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 17:27:34.435391 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 17:27:34.441010 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 23 17:27:34.446445 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 17:27:34.451592 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 17:27:34.467949 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 17:27:34.472902 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 17:27:34.478825 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 17:27:34.483433 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 17:27:34.487784 systemd[1]: Reached target basic.target - Basic System.
Jan 23 17:27:34.492030 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 17:27:34.492054 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 17:27:34.494634 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 23 17:27:34.507407 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 17:27:34.515247 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 23 17:27:34.523462 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 17:27:34.530630 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 17:27:34.539431 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 17:27:34.546807 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 17:27:34.547320 chronyd[2058]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Jan 23 17:27:34.551443 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 17:27:34.554491 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 23 17:27:34.558878 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 23 17:27:34.559954 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 17:27:34.561572 KVP[2068]: KVP starting; pid is:2068
Jan 23 17:27:34.565355 chronyd[2058]: Timezone right/UTC failed leap second check, ignoring
Jan 23 17:27:34.565528 chronyd[2058]: Loaded seccomp filter (level 2)
Jan 23 17:27:34.569535 jq[2066]: false
Jan 23 17:27:34.569523 KVP[2068]: KVP LIC Version: 3.1
Jan 23 17:27:34.568168 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 23 17:27:34.570337 kernel: hv_utils: KVP IC version 4.0
Jan 23 17:27:34.579890 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 23 17:27:34.589461 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 23 17:27:34.599188 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 23 17:27:34.606698 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 23 17:27:34.614805 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 23 17:27:34.619538 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 23 17:27:34.621410 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 23 17:27:34.622281 systemd[1]: Starting update-engine.service - Update Engine...
Jan 23 17:27:34.629080 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 23 17:27:34.634682 extend-filesystems[2067]: Found /dev/sda6
Jan 23 17:27:34.642464 systemd[1]: Started chronyd.service - NTP client/server.
Jan 23 17:27:34.646112 jq[2089]: true
Jan 23 17:27:34.650183 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 23 17:27:34.656986 extend-filesystems[2067]: Found /dev/sda9
Jan 23 17:27:34.657535 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 23 17:27:34.659349 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 23 17:27:34.662178 systemd[1]: motdgen.service: Deactivated successfully.
Jan 23 17:27:34.662555 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 23 17:27:34.670098 extend-filesystems[2067]: Checking size of /dev/sda9
Jan 23 17:27:34.670651 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 23 17:27:34.671342 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 23 17:27:34.696754 update_engine[2086]: I20260123 17:27:34.696033 2086 main.cc:92] Flatcar Update Engine starting
Jan 23 17:27:34.705840 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 23 17:27:34.721782 jq[2106]: true
Jan 23 17:27:34.724375 extend-filesystems[2067]: Resized partition /dev/sda9
Jan 23 17:27:34.751974 extend-filesystems[2125]: resize2fs 1.47.3 (8-Jul-2025)
Jan 23 17:27:34.786442 kernel: EXT4-fs (sda9): resizing filesystem from 6359552 to 6376955 blocks
Jan 23 17:27:34.786550 kernel: EXT4-fs (sda9): resized filesystem to 6376955
Jan 23 17:27:34.788036 systemd-logind[2081]: New seat seat0.
Jan 23 17:27:34.798504 tar[2105]: linux-arm64/LICENSE
Jan 23 17:27:34.824943 dbus-daemon[2061]: [system] SELinux support is enabled
Jan 23 17:27:34.840108 update_engine[2086]: I20260123 17:27:34.831174 2086 update_check_scheduler.cc:74] Next update check in 7m55s
Jan 23 17:27:34.825138 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 23 17:27:34.833809 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 23 17:27:34.833856 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 23 17:27:34.840567 systemd-logind[2081]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 23 17:27:34.841617 tar[2105]: linux-arm64/helm
Jan 23 17:27:34.843400 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 23 17:27:34.843423 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 23 17:27:34.852533 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 23 17:27:34.859562 systemd[1]: Started update-engine.service - Update Engine.
Jan 23 17:27:34.870158 extend-filesystems[2125]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jan 23 17:27:34.870158 extend-filesystems[2125]: old_desc_blocks = 4, new_desc_blocks = 4
Jan 23 17:27:34.870158 extend-filesystems[2125]: The filesystem on /dev/sda9 is now 6376955 (4k) blocks long.
Jan 23 17:27:34.958533 extend-filesystems[2067]: Resized filesystem in /dev/sda9
Jan 23 17:27:34.971679 bash[2149]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 17:27:34.890607 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 23 17:27:34.971840 coreos-metadata[2060]: Jan 23 17:27:34.939 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 23 17:27:34.971840 coreos-metadata[2060]: Jan 23 17:27:34.947 INFO Fetch successful
Jan 23 17:27:34.971840 coreos-metadata[2060]: Jan 23 17:27:34.948 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 23 17:27:34.971840 coreos-metadata[2060]: Jan 23 17:27:34.952 INFO Fetch successful
Jan 23 17:27:34.971840 coreos-metadata[2060]: Jan 23 17:27:34.953 INFO Fetching http://168.63.129.16/machine/1ef7f560-8392-4130-ae18-79b8c47cb7ba/6333b8ad%2D854e%2D4fde%2Db156%2D36d286d5eea1.%5Fci%2D4547.1.0%2Da%2Df00ee6181d?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 23 17:27:34.971840 coreos-metadata[2060]: Jan 23 17:27:34.954 INFO Fetch successful
Jan 23 17:27:34.971840 coreos-metadata[2060]: Jan 23 17:27:34.964 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 23 17:27:34.911082 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 23 17:27:34.911341 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 23 17:27:34.962359 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 23 17:27:34.978606 coreos-metadata[2060]: Jan 23 17:27:34.977 INFO Fetch successful
Jan 23 17:27:34.984404 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 23 17:27:35.070663 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 23 17:27:35.076654 sshd_keygen[2094]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 23 17:27:35.078159 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 23 17:27:35.155378 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 23 17:27:35.171930 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 23 17:27:35.180541 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 23 17:27:35.214466 systemd[1]: issuegen.service: Deactivated successfully.
Jan 23 17:27:35.214722 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 23 17:27:35.220530 locksmithd[2168]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 23 17:27:35.227648 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 23 17:27:35.245525 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 23 17:27:35.254475 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 23 17:27:35.255486 containerd[2108]: time="2026-01-23T17:27:35Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 23 17:27:35.256885 containerd[2108]: time="2026-01-23T17:27:35.256850644Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Jan 23 17:27:35.264456 containerd[2108]: time="2026-01-23T17:27:35.264410340Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.008µs"
Jan 23 17:27:35.265566 containerd[2108]: time="2026-01-23T17:27:35.265531708Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 23 17:27:35.265723 containerd[2108]: time="2026-01-23T17:27:35.265708164Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 23 17:27:35.265773 containerd[2108]: time="2026-01-23T17:27:35.265761572Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 23 17:27:35.265977 containerd[2108]: time="2026-01-23T17:27:35.265958556Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 23 17:27:35.266034 containerd[2108]: time="2026-01-23T17:27:35.266023468Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 17:27:35.266141 containerd[2108]: time="2026-01-23T17:27:35.266127180Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 17:27:35.266206 containerd[2108]: time="2026-01-23T17:27:35.266192116Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 17:27:35.266529 containerd[2108]: time="2026-01-23T17:27:35.266503892Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 17:27:35.266592 containerd[2108]: time="2026-01-23T17:27:35.266578836Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 17:27:35.266645 containerd[2108]: time="2026-01-23T17:27:35.266633484Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 17:27:35.266679 containerd[2108]: time="2026-01-23T17:27:35.266668332Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 23 17:27:35.266889 containerd[2108]: time="2026-01-23T17:27:35.266869596Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 23 17:27:35.266947 containerd[2108]: time="2026-01-23T17:27:35.266935780Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 23 17:27:35.267075 containerd[2108]: time="2026-01-23T17:27:35.267061004Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 23 17:27:35.267315 containerd[2108]: time="2026-01-23T17:27:35.267300668Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 17:27:35.267999 containerd[2108]: time="2026-01-23T17:27:35.267978292Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 17:27:35.268061 containerd[2108]: time="2026-01-23T17:27:35.268048460Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 23 17:27:35.268132 containerd[2108]: time="2026-01-23T17:27:35.268121652Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 23 17:27:35.268372 containerd[2108]: time="2026-01-23T17:27:35.268355508Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 23 17:27:35.268543 containerd[2108]: time="2026-01-23T17:27:35.268523892Z" level=info msg="metadata content store policy set" policy=shared
Jan 23 17:27:35.269160 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 23 17:27:35.279059 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 23 17:27:35.287847 systemd[1]: Reached target getty.target - Login Prompts.
Jan 23 17:27:35.301461 containerd[2108]: time="2026-01-23T17:27:35.301402436Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 23 17:27:35.301629 containerd[2108]: time="2026-01-23T17:27:35.301615012Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 23 17:27:35.302243 containerd[2108]: time="2026-01-23T17:27:35.301801364Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Jan 23 17:27:35.302243 containerd[2108]: time="2026-01-23T17:27:35.301820940Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 23 17:27:35.302243 containerd[2108]: time="2026-01-23T17:27:35.301833020Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 23 17:27:35.302243 containerd[2108]: time="2026-01-23T17:27:35.301842964Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 23 17:27:35.302243 containerd[2108]: time="2026-01-23T17:27:35.301851580Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 23 17:27:35.302243 containerd[2108]: time="2026-01-23T17:27:35.301858108Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 23 17:27:35.302243 containerd[2108]: time="2026-01-23T17:27:35.301866276Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 23 17:27:35.302243 containerd[2108]: time="2026-01-23T17:27:35.301880148Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 23 17:27:35.302243 containerd[2108]: time="2026-01-23T17:27:35.301888900Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 23 17:27:35.302243 containerd[2108]: time="2026-01-23T17:27:35.301895556Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 23 17:27:35.302243 containerd[2108]: time="2026-01-23T17:27:35.301901724Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 23 17:27:35.302243 containerd[2108]: time="2026-01-23T17:27:35.301910652Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 23 17:27:35.302243 containerd[2108]: time="2026-01-23T17:27:35.302055972Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 23 17:27:35.302460 containerd[2108]: time="2026-01-23T17:27:35.302071932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 23 17:27:35.302460 containerd[2108]: time="2026-01-23T17:27:35.302083332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 23 17:27:35.302460 containerd[2108]: time="2026-01-23T17:27:35.302090164Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 23 17:27:35.302460 containerd[2108]: time="2026-01-23T17:27:35.302103820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 23 17:27:35.302460 containerd[2108]: time="2026-01-23T17:27:35.302110724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 23 17:27:35.302460 containerd[2108]: time="2026-01-23T17:27:35.302120452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 23 17:27:35.302460 containerd[2108]: time="2026-01-23T17:27:35.302126292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 23 17:27:35.302460 containerd[2108]: time="2026-01-23T17:27:35.302132580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 23 17:27:35.302460 containerd[2108]: time="2026-01-23T17:27:35.302141948Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 23 17:27:35.302460 containerd[2108]: time="2026-01-23T17:27:35.302148060Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 23 17:27:35.302460 containerd[2108]: time="2026-01-23T17:27:35.302169588Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 23 17:27:35.302460 containerd[2108]: time="2026-01-23T17:27:35.302210788Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 23 17:27:35.302740 containerd[2108]: time="2026-01-23T17:27:35.302220916Z" level=info msg="Start snapshots syncer"
Jan 23 17:27:35.303336 containerd[2108]: time="2026-01-23T17:27:35.302814940Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 23 17:27:35.303336 containerd[2108]: time="2026-01-23T17:27:35.303074900Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 23 17:27:35.303436 containerd[2108]: time="2026-01-23T17:27:35.303120836Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 23 17:27:35.303436 containerd[2108]: time="2026-01-23T17:27:35.303161964Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 23 17:27:35.303785 containerd[2108]: time="2026-01-23T17:27:35.303763804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 23 17:27:35.303892 containerd[2108]: time="2026-01-23T17:27:35.303879924Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 23 17:27:35.303950 containerd[2108]: time="2026-01-23T17:27:35.303939652Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 23 17:27:35.304001 containerd[2108]: time="2026-01-23T17:27:35.303991692Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 23 17:27:35.304112 containerd[2108]: time="2026-01-23T17:27:35.304085516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 23 17:27:35.304170 containerd[2108]: time="2026-01-23T17:27:35.304157428Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 23 17:27:35.304231 containerd[2108]: time="2026-01-23T17:27:35.304220948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 23 17:27:35.304303 containerd[2108]: time="2026-01-23T17:27:35.304263924Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 23 17:27:35.304381 containerd[2108]: time="2026-01-23T17:27:35.304339940Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 23 17:27:35.304445 containerd[2108]: time="2026-01-23T17:27:35.304432988Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 17:27:35.304677 containerd[2108]: time="2026-01-23T17:27:35.304551772Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 17:27:35.304677 containerd[2108]: time="2026-01-23T17:27:35.304565060Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 17:27:35.304677 containerd[2108]: time="2026-01-23T17:27:35.304572012Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 17:27:35.304677 containerd[2108]: time="2026-01-23T17:27:35.304576860Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 23 17:27:35.304677 containerd[2108]: time="2026-01-23T17:27:35.304583964Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 23 17:27:35.304677 containerd[2108]: time="2026-01-23T17:27:35.304590828Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 23 17:27:35.304677 containerd[2108]: time="2026-01-23T17:27:35.304602964Z" level=info msg="runtime interface created"
Jan 23 17:27:35.304677 containerd[2108]: time="2026-01-23T17:27:35.304606308Z" level=info msg="created NRI interface"
Jan 23 17:27:35.304677 containerd[2108]: time="2026-01-23T17:27:35.304611348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 23 17:27:35.304677 containerd[2108]: time="2026-01-23T17:27:35.304620916Z" level=info msg="Connect containerd service"
Jan 23 17:27:35.304677 containerd[2108]: time="2026-01-23T17:27:35.304645788Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 23 17:27:35.306288 containerd[2108]: time="2026-01-23T17:27:35.305998148Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 17:27:35.443776 tar[2105]: linux-arm64/README.md
Jan 23 17:27:35.455205 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 23 17:27:35.709063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 17:27:35.768518 (kubelet)[2275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 17:27:35.812146 containerd[2108]: time="2026-01-23T17:27:35.812087348Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 23 17:27:35.812146 containerd[2108]: time="2026-01-23T17:27:35.812162492Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 23 17:27:35.812279 containerd[2108]: time="2026-01-23T17:27:35.812189588Z" level=info msg="Start subscribing containerd event"
Jan 23 17:27:35.812279 containerd[2108]: time="2026-01-23T17:27:35.812221532Z" level=info msg="Start recovering state"
Jan 23 17:27:35.812667 containerd[2108]: time="2026-01-23T17:27:35.812318356Z" level=info msg="Start event monitor"
Jan 23 17:27:35.812667 containerd[2108]: time="2026-01-23T17:27:35.812329700Z" level=info msg="Start cni network conf syncer for default"
Jan 23 17:27:35.812667 containerd[2108]: time="2026-01-23T17:27:35.812335372Z" level=info msg="Start streaming server"
Jan 23 17:27:35.812667 containerd[2108]: time="2026-01-23T17:27:35.812341900Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 23 17:27:35.812667 containerd[2108]: time="2026-01-23T17:27:35.812346460Z" level=info msg="runtime interface starting up..."
Jan 23 17:27:35.812667 containerd[2108]: time="2026-01-23T17:27:35.812350228Z" level=info msg="starting plugins..."
Jan 23 17:27:35.812667 containerd[2108]: time="2026-01-23T17:27:35.812360892Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 23 17:27:35.812659 systemd[1]: Started containerd.service - containerd container runtime.
Jan 23 17:27:35.818551 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 23 17:27:35.819486 containerd[2108]: time="2026-01-23T17:27:35.819450852Z" level=info msg="containerd successfully booted in 0.564351s"
Jan 23 17:27:35.825385 systemd[1]: Startup finished in 3.074s (kernel) + 12.427s (initrd) + 13.952s (userspace) = 29.454s.
Jan 23 17:27:36.120696 kubelet[2275]: E0123 17:27:36.120587 2275 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 17:27:36.122827 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 17:27:36.122941 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 17:27:36.123304 systemd[1]: kubelet.service: Consumed 567ms CPU time, 256.7M memory peak.
Jan 23 17:27:36.517730 login[2253]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:27:36.518129 login[2252]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:27:36.528709 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 23 17:27:36.529939 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 23 17:27:36.531788 systemd-logind[2081]: New session 1 of user core.
Jan 23 17:27:36.535997 systemd-logind[2081]: New session 2 of user core.
Jan 23 17:27:36.546589 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 23 17:27:36.550413 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 23 17:27:36.560781 (systemd)[2290]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:27:36.563356 systemd-logind[2081]: New session 3 of user core.
Jan 23 17:27:36.680241 systemd[2290]: Queued start job for default target default.target.
Jan 23 17:27:36.690230 systemd[2290]: Created slice app.slice - User Application Slice.
Jan 23 17:27:36.690289 systemd[2290]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Jan 23 17:27:36.690300 systemd[2290]: Reached target paths.target - Paths.
Jan 23 17:27:36.690352 systemd[2290]: Reached target timers.target - Timers.
Jan 23 17:27:36.691523 systemd[2290]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 23 17:27:36.694452 systemd[2290]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Jan 23 17:27:36.700121 systemd[2290]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 23 17:27:36.700180 systemd[2290]: Reached target sockets.target - Sockets.
Jan 23 17:27:36.703966 systemd[2290]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Jan 23 17:27:36.705311 systemd[2290]: Reached target basic.target - Basic System.
Jan 23 17:27:36.705367 systemd[2290]: Reached target default.target - Main User Target.
Jan 23 17:27:36.705389 systemd[2290]: Startup finished in 137ms.
Jan 23 17:27:36.705624 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 23 17:27:36.714393 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 23 17:27:36.715138 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 23 17:27:37.010132 waagent[2248]: 2026-01-23T17:27:37.005901Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
Jan 23 17:27:37.010709 waagent[2248]: 2026-01-23T17:27:37.010659Z INFO Daemon Daemon OS: flatcar 4547.1.0
Jan 23 17:27:37.014139 waagent[2248]: 2026-01-23T17:27:37.014094Z INFO Daemon Daemon Python: 3.11.13
Jan 23 17:27:37.019364 waagent[2248]: 2026-01-23T17:27:37.019315Z INFO Daemon Daemon Run daemon
Jan 23 17:27:37.022389 waagent[2248]: 2026-01-23T17:27:37.022352Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4547.1.0'
Jan 23 17:27:37.028694 waagent[2248]: 2026-01-23T17:27:37.028628Z INFO Daemon Daemon Using waagent for provisioning
Jan 23 17:27:37.033551 waagent[2248]: 2026-01-23T17:27:37.033498Z INFO Daemon Daemon Activate resource disk
Jan 23 17:27:37.037501 waagent[2248]: 2026-01-23T17:27:37.037445Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jan 23 17:27:37.045824 waagent[2248]: 2026-01-23T17:27:37.045762Z INFO Daemon Daemon Found device: None
Jan 23 17:27:37.049427 waagent[2248]: 2026-01-23T17:27:37.049370Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jan 23 17:27:37.055943 waagent[2248]: 2026-01-23T17:27:37.055887Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jan 23 17:27:37.065483 waagent[2248]: 2026-01-23T17:27:37.065435Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 23 17:27:37.070042 waagent[2248]: 2026-01-23T17:27:37.069993Z INFO Daemon Daemon Running default provisioning handler
Jan 23 17:27:37.079306 waagent[2248]: 2026-01-23T17:27:37.079230Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jan 23 17:27:37.090133 waagent[2248]: 2026-01-23T17:27:37.090076Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jan 23 17:27:37.097644 waagent[2248]: 2026-01-23T17:27:37.097581Z INFO Daemon Daemon cloud-init is enabled: False
Jan 23 17:27:37.101490 waagent[2248]: 2026-01-23T17:27:37.101435Z INFO Daemon Daemon Copying ovf-env.xml
Jan 23 17:27:37.199563 waagent[2248]: 2026-01-23T17:27:37.199484Z INFO Daemon Daemon Successfully mounted dvd
Jan 23 17:27:37.227597 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jan 23 17:27:37.229402 waagent[2248]: 2026-01-23T17:27:37.229250Z INFO Daemon Daemon Detect protocol endpoint
Jan 23 17:27:37.232935 waagent[2248]: 2026-01-23T17:27:37.232873Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 23 17:27:37.237249 waagent[2248]: 2026-01-23T17:27:37.237201Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jan 23 17:27:37.242023 waagent[2248]: 2026-01-23T17:27:37.241984Z INFO Daemon Daemon Test for route to 168.63.129.16
Jan 23 17:27:37.246027 waagent[2248]: 2026-01-23T17:27:37.245983Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jan 23 17:27:37.249850 waagent[2248]: 2026-01-23T17:27:37.249812Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jan 23 17:27:37.262299 waagent[2248]: 2026-01-23T17:27:37.262202Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jan 23 17:27:37.267379 waagent[2248]: 2026-01-23T17:27:37.267355Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jan 23 17:27:37.271681 waagent[2248]: 2026-01-23T17:27:37.271646Z INFO Daemon Daemon Server preferred version:2015-04-05
Jan 23 17:27:37.389719 waagent[2248]: 2026-01-23T17:27:37.389622Z INFO Daemon Daemon Initializing goal state during protocol detection
Jan 23 17:27:37.395050 waagent[2248]: 2026-01-23T17:27:37.394993Z INFO Daemon Daemon Forcing an update of the goal state.
Jan 23 17:27:37.402006 waagent[2248]: 2026-01-23T17:27:37.401962Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 23 17:27:37.420368 waagent[2248]: 2026-01-23T17:27:37.420330Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177
Jan 23 17:27:37.424754 waagent[2248]: 2026-01-23T17:27:37.424715Z INFO Daemon
Jan 23 17:27:37.427240 waagent[2248]: 2026-01-23T17:27:37.427205Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 95436a79-470b-40f9-93f0-3c905206d86e eTag: 2661009986166909493 source: Fabric]
Jan 23 17:27:37.435700 waagent[2248]: 2026-01-23T17:27:37.435660Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jan 23 17:27:37.440665 waagent[2248]: 2026-01-23T17:27:37.440630Z INFO Daemon
Jan 23 17:27:37.443085 waagent[2248]: 2026-01-23T17:27:37.443053Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jan 23 17:27:37.451411 waagent[2248]: 2026-01-23T17:27:37.451380Z INFO Daemon Daemon Downloading artifacts profile blob
Jan 23 17:27:37.513170 waagent[2248]: 2026-01-23T17:27:37.513029Z INFO Daemon Downloaded certificate {'thumbprint': '11F9DD3EF9BAEADA3368A08709284D9B97F53B05', 'hasPrivateKey': True}
Jan 23 17:27:37.520265 waagent[2248]: 2026-01-23T17:27:37.520217Z INFO Daemon Fetch goal state completed
Jan 23 17:27:37.530843 waagent[2248]: 2026-01-23T17:27:37.530806Z INFO Daemon Daemon Starting provisioning
Jan 23 17:27:37.534673 waagent[2248]: 2026-01-23T17:27:37.534632Z INFO Daemon Daemon Handle ovf-env.xml.
Jan 23 17:27:37.538225 waagent[2248]: 2026-01-23T17:27:37.538193Z INFO Daemon Daemon Set hostname [ci-4547.1.0-a-f00ee6181d] Jan 23 17:27:37.544918 waagent[2248]: 2026-01-23T17:27:37.544866Z INFO Daemon Daemon Publish hostname [ci-4547.1.0-a-f00ee6181d] Jan 23 17:27:37.549715 waagent[2248]: 2026-01-23T17:27:37.549667Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 23 17:27:37.554729 waagent[2248]: 2026-01-23T17:27:37.554689Z INFO Daemon Daemon Primary interface is [eth0] Jan 23 17:27:37.564798 systemd-networkd[1692]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 17:27:37.564807 systemd-networkd[1692]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Jan 23 17:27:37.564891 systemd-networkd[1692]: eth0: DHCP lease lost Jan 23 17:27:37.580028 waagent[2248]: 2026-01-23T17:27:37.579955Z INFO Daemon Daemon Create user account if not exists Jan 23 17:27:37.584729 waagent[2248]: 2026-01-23T17:27:37.584670Z INFO Daemon Daemon User core already exists, skip useradd Jan 23 17:27:37.589070 waagent[2248]: 2026-01-23T17:27:37.589019Z INFO Daemon Daemon Configure sudoer Jan 23 17:27:37.597358 systemd-networkd[1692]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 17:27:37.599437 waagent[2248]: 2026-01-23T17:27:37.599362Z INFO Daemon Daemon Configure sshd Jan 23 17:27:37.606319 waagent[2248]: 2026-01-23T17:27:37.606236Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 23 17:27:37.615797 waagent[2248]: 2026-01-23T17:27:37.615748Z INFO Daemon Daemon Deploy ssh public key. 
Jan 23 17:27:38.712910 waagent[2248]: 2026-01-23T17:27:38.712848Z INFO Daemon Daemon Provisioning complete Jan 23 17:27:38.726959 waagent[2248]: 2026-01-23T17:27:38.726908Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 23 17:27:38.731657 waagent[2248]: 2026-01-23T17:27:38.731601Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 23 17:27:38.738862 waagent[2248]: 2026-01-23T17:27:38.738811Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 23 17:27:38.846316 waagent[2343]: 2026-01-23T17:27:38.846097Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 23 17:27:38.846316 waagent[2343]: 2026-01-23T17:27:38.846243Z INFO ExtHandler ExtHandler OS: flatcar 4547.1.0 Jan 23 17:27:38.846316 waagent[2343]: 2026-01-23T17:27:38.846322Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 23 17:27:38.846670 waagent[2343]: 2026-01-23T17:27:38.846366Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jan 23 17:27:38.893746 waagent[2343]: 2026-01-23T17:27:38.893656Z INFO ExtHandler ExtHandler Distro: flatcar-4547.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 23 17:27:38.893918 waagent[2343]: 2026-01-23T17:27:38.893887Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 17:27:38.893960 waagent[2343]: 2026-01-23T17:27:38.893942Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 17:27:38.900129 waagent[2343]: 2026-01-23T17:27:38.900072Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 17:27:38.905366 waagent[2343]: 2026-01-23T17:27:38.905326Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 23 17:27:38.905849 waagent[2343]: 2026-01-23T17:27:38.905814Z INFO ExtHandler Jan 23 17:27:38.905906 waagent[2343]: 
2026-01-23T17:27:38.905888Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 2e4f7550-ad74-4cf5-a13d-14d0cd0067ba eTag: 2661009986166909493 source: Fabric] Jan 23 17:27:38.906149 waagent[2343]: 2026-01-23T17:27:38.906122Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 23 17:27:38.906614 waagent[2343]: 2026-01-23T17:27:38.906583Z INFO ExtHandler Jan 23 17:27:38.906656 waagent[2343]: 2026-01-23T17:27:38.906638Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 23 17:27:38.909779 waagent[2343]: 2026-01-23T17:27:38.909750Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 17:27:38.969955 waagent[2343]: 2026-01-23T17:27:38.969804Z INFO ExtHandler Downloaded certificate {'thumbprint': '11F9DD3EF9BAEADA3368A08709284D9B97F53B05', 'hasPrivateKey': True} Jan 23 17:27:38.970374 waagent[2343]: 2026-01-23T17:27:38.970340Z INFO ExtHandler Fetch goal state completed Jan 23 17:27:38.984749 waagent[2343]: 2026-01-23T17:27:38.984677Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.5-dev (Library: OpenSSL 3.5.5-dev ) Jan 23 17:27:38.988679 waagent[2343]: 2026-01-23T17:27:38.988612Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2343 Jan 23 17:27:38.988794 waagent[2343]: 2026-01-23T17:27:38.988766Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 23 17:27:38.989082 waagent[2343]: 2026-01-23T17:27:38.989054Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 23 17:27:38.990235 waagent[2343]: 2026-01-23T17:27:38.990197Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4547.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 23 17:27:38.990615 waagent[2343]: 2026-01-23T17:27:38.990583Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4547.1.0', '', 
'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 23 17:27:38.990743 waagent[2343]: 2026-01-23T17:27:38.990718Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 23 17:27:38.991178 waagent[2343]: 2026-01-23T17:27:38.991145Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 23 17:27:39.057869 waagent[2343]: 2026-01-23T17:27:39.057827Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 23 17:27:39.058079 waagent[2343]: 2026-01-23T17:27:39.058046Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 23 17:27:39.064304 waagent[2343]: 2026-01-23T17:27:39.063826Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 23 17:27:39.069731 systemd[1]: Reload requested from client PID 2358 ('systemctl') (unit waagent.service)... Jan 23 17:27:39.069990 systemd[1]: Reloading... Jan 23 17:27:39.161302 zram_generator::config[2418]: No configuration found. Jan 23 17:27:39.308869 systemd[1]: Reloading finished in 238 ms. Jan 23 17:27:39.328997 waagent[2343]: 2026-01-23T17:27:39.328910Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 23 17:27:39.329109 waagent[2343]: 2026-01-23T17:27:39.329084Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 23 17:27:39.882048 waagent[2343]: 2026-01-23T17:27:39.881965Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 23 17:27:39.882384 waagent[2343]: 2026-01-23T17:27:39.882338Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. 
cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 23 17:27:39.883099 waagent[2343]: 2026-01-23T17:27:39.883050Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 23 17:27:39.883492 waagent[2343]: 2026-01-23T17:27:39.883422Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 23 17:27:39.884304 waagent[2343]: 2026-01-23T17:27:39.883681Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 17:27:39.884304 waagent[2343]: 2026-01-23T17:27:39.883751Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 17:27:39.884304 waagent[2343]: 2026-01-23T17:27:39.883860Z INFO EnvHandler ExtHandler Configure routes Jan 23 17:27:39.884304 waagent[2343]: 2026-01-23T17:27:39.883901Z INFO EnvHandler ExtHandler Gateway:None Jan 23 17:27:39.884304 waagent[2343]: 2026-01-23T17:27:39.883925Z INFO EnvHandler ExtHandler Routes:None Jan 23 17:27:39.884702 waagent[2343]: 2026-01-23T17:27:39.884633Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 23 17:27:39.884751 waagent[2343]: 2026-01-23T17:27:39.884697Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 23 17:27:39.884944 waagent[2343]: 2026-01-23T17:27:39.884913Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 17:27:39.885016 waagent[2343]: 2026-01-23T17:27:39.884983Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 23 17:27:39.885092 waagent[2343]: 2026-01-23T17:27:39.885073Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jan 23 17:27:39.885371 waagent[2343]: 2026-01-23T17:27:39.885341Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 17:27:39.885962 waagent[2343]: 2026-01-23T17:27:39.885918Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 23 17:27:39.886008 waagent[2343]: 2026-01-23T17:27:39.885985Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 23 17:27:39.887694 waagent[2343]: 2026-01-23T17:27:39.887649Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 23 17:27:39.887694 waagent[2343]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 23 17:27:39.887694 waagent[2343]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 23 17:27:39.887694 waagent[2343]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 23 17:27:39.887694 waagent[2343]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 23 17:27:39.887694 waagent[2343]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 17:27:39.887694 waagent[2343]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 17:27:39.892927 waagent[2343]: 2026-01-23T17:27:39.892876Z INFO ExtHandler ExtHandler Jan 23 17:27:39.893031 waagent[2343]: 2026-01-23T17:27:39.892956Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 460ea917-c21e-4707-92d8-39dfad1a15e0 correlation 6e77e131-dbdf-45e4-89d6-9a14c25eba16 created: 2026-01-23T17:26:39.930801Z] Jan 23 17:27:39.893290 waagent[2343]: 2026-01-23T17:27:39.893243Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 23 17:27:39.893718 waagent[2343]: 2026-01-23T17:27:39.893683Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jan 23 17:27:39.922440 waagent[2343]: 2026-01-23T17:27:39.922373Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 23 17:27:39.922440 waagent[2343]: Try `iptables -h' or 'iptables --help' for more information.) Jan 23 17:27:39.922827 waagent[2343]: 2026-01-23T17:27:39.922790Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: EBB0BF98-1CD7-47B0-A2D0-4753239C4667;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 23 17:27:39.950653 waagent[2343]: 2026-01-23T17:27:39.950232Z INFO MonitorHandler ExtHandler Network interfaces: Jan 23 17:27:39.950653 waagent[2343]: Executing ['ip', '-a', '-o', 'link']: Jan 23 17:27:39.950653 waagent[2343]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 23 17:27:39.950653 waagent[2343]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:77:1c:e4 brd ff:ff:ff:ff:ff:ff\ altname enx002248771ce4 Jan 23 17:27:39.950653 waagent[2343]: 3: enP12907s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:77:1c:e4 brd ff:ff:ff:ff:ff:ff\ altname enP12907p0s2 Jan 23 17:27:39.950653 waagent[2343]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 23 17:27:39.950653 waagent[2343]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 23 17:27:39.950653 waagent[2343]: 2: eth0 inet 10.200.20.34/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 23 17:27:39.950653 waagent[2343]: 
Executing ['ip', '-6', '-a', '-o', 'address']: Jan 23 17:27:39.950653 waagent[2343]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 23 17:27:39.950653 waagent[2343]: 2: eth0 inet6 fe80::222:48ff:fe77:1ce4/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 23 17:27:39.997379 waagent[2343]: 2026-01-23T17:27:39.997316Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 23 17:27:39.997379 waagent[2343]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 17:27:39.997379 waagent[2343]: pkts bytes target prot opt in out source destination Jan 23 17:27:39.997379 waagent[2343]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 17:27:39.997379 waagent[2343]: pkts bytes target prot opt in out source destination Jan 23 17:27:39.997379 waagent[2343]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 17:27:39.997379 waagent[2343]: pkts bytes target prot opt in out source destination Jan 23 17:27:39.997379 waagent[2343]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 17:27:39.997379 waagent[2343]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 17:27:39.997379 waagent[2343]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 17:27:40.000133 waagent[2343]: 2026-01-23T17:27:40.000091Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 23 17:27:40.000133 waagent[2343]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 17:27:40.000133 waagent[2343]: pkts bytes target prot opt in out source destination Jan 23 17:27:40.000133 waagent[2343]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 17:27:40.000133 waagent[2343]: pkts bytes target prot opt in out source destination Jan 23 17:27:40.000133 waagent[2343]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 17:27:40.000133 waagent[2343]: pkts bytes target prot opt in out source destination Jan 23 17:27:40.000133 waagent[2343]: 0 0 ACCEPT tcp -- * * 
0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 17:27:40.000133 waagent[2343]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 17:27:40.000133 waagent[2343]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 17:27:40.000613 waagent[2343]: 2026-01-23T17:27:40.000588Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 23 17:27:46.211372 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 17:27:46.212688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:27:47.744119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:27:47.751535 (kubelet)[2496]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:27:47.779443 kubelet[2496]: E0123 17:27:47.779361 2496 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:27:47.781845 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:27:47.781968 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:27:47.782405 systemd[1]: kubelet.service: Consumed 121ms CPU time, 105.1M memory peak. Jan 23 17:27:56.027812 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 17:27:56.028901 systemd[1]: Started sshd@0-10.200.20.34:22-10.200.16.10:54148.service - OpenSSH per-connection server daemon (10.200.16.10:54148). 
Jan 23 17:27:56.612650 sshd[2503]: Accepted publickey for core from 10.200.16.10 port 54148 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:27:56.613792 sshd-session[2503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:27:56.617724 systemd-logind[2081]: New session 4 of user core. Jan 23 17:27:56.624453 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 17:27:56.934457 systemd[1]: Started sshd@1-10.200.20.34:22-10.200.16.10:54152.service - OpenSSH per-connection server daemon (10.200.16.10:54152). Jan 23 17:27:57.356119 sshd[2510]: Accepted publickey for core from 10.200.16.10 port 54152 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:27:57.356965 sshd-session[2510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:27:57.360976 systemd-logind[2081]: New session 5 of user core. Jan 23 17:27:57.368691 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 17:27:57.590155 sshd[2514]: Connection closed by 10.200.16.10 port 54152 Jan 23 17:27:57.590772 sshd-session[2510]: pam_unix(sshd:session): session closed for user core Jan 23 17:27:57.594929 systemd-logind[2081]: Session 5 logged out. Waiting for processes to exit. Jan 23 17:27:57.595564 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 17:27:57.596743 systemd[1]: sshd@1-10.200.20.34:22-10.200.16.10:54152.service: Deactivated successfully. Jan 23 17:27:57.600552 systemd-logind[2081]: Removed session 5. Jan 23 17:27:57.690443 systemd[1]: Started sshd@2-10.200.20.34:22-10.200.16.10:54168.service - OpenSSH per-connection server daemon (10.200.16.10:54168). Jan 23 17:27:57.959739 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 17:27:57.962127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:27:58.075695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 17:27:58.082611 (kubelet)[2531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:27:58.113398 sshd[2520]: Accepted publickey for core from 10.200.16.10 port 54168 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:27:58.114535 sshd-session[2520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:27:58.118800 systemd-logind[2081]: New session 6 of user core. Jan 23 17:27:58.129481 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 17:27:58.205381 kubelet[2531]: E0123 17:27:58.205302 2531 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:27:58.207218 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:27:58.207355 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:27:58.207952 systemd[1]: kubelet.service: Consumed 112ms CPU time, 105M memory peak. Jan 23 17:27:58.344398 sshd[2537]: Connection closed by 10.200.16.10 port 54168 Jan 23 17:27:58.345191 sshd-session[2520]: pam_unix(sshd:session): session closed for user core Jan 23 17:27:58.349601 systemd[1]: sshd@2-10.200.20.34:22-10.200.16.10:54168.service: Deactivated successfully. Jan 23 17:27:58.351506 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 17:27:58.352967 systemd-logind[2081]: Session 6 logged out. Waiting for processes to exit. Jan 23 17:27:58.354110 systemd-logind[2081]: Removed session 6. 
Jan 23 17:27:58.356360 chronyd[2058]: Selected source PHC0 Jan 23 17:27:58.432555 systemd[1]: Started sshd@3-10.200.20.34:22-10.200.16.10:54184.service - OpenSSH per-connection server daemon (10.200.16.10:54184). Jan 23 17:27:58.855172 sshd[2544]: Accepted publickey for core from 10.200.16.10 port 54184 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:27:58.856042 sshd-session[2544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:27:58.860322 systemd-logind[2081]: New session 7 of user core. Jan 23 17:27:58.869659 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 17:27:59.607937 sshd[2548]: Connection closed by 10.200.16.10 port 54184 Jan 23 17:27:59.608728 sshd-session[2544]: pam_unix(sshd:session): session closed for user core Jan 23 17:27:59.611973 systemd[1]: sshd@3-10.200.20.34:22-10.200.16.10:54184.service: Deactivated successfully. Jan 23 17:27:59.613529 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 17:27:59.614809 systemd-logind[2081]: Session 7 logged out. Waiting for processes to exit. Jan 23 17:27:59.616171 systemd-logind[2081]: Removed session 7. Jan 23 17:27:59.700217 systemd[1]: Started sshd@4-10.200.20.34:22-10.200.16.10:45738.service - OpenSSH per-connection server daemon (10.200.16.10:45738). Jan 23 17:28:00.121958 sshd[2554]: Accepted publickey for core from 10.200.16.10 port 45738 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:28:00.122795 sshd-session[2554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:28:00.126752 systemd-logind[2081]: New session 8 of user core. Jan 23 17:28:00.137451 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 23 17:28:00.403997 sudo[2559]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 17:28:00.404249 sudo[2559]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:28:00.429714 sudo[2559]: pam_unix(sudo:session): session closed for user root Jan 23 17:28:00.508544 sshd[2558]: Connection closed by 10.200.16.10 port 45738 Jan 23 17:28:00.507327 sshd-session[2554]: pam_unix(sshd:session): session closed for user core Jan 23 17:28:00.511485 systemd[1]: sshd@4-10.200.20.34:22-10.200.16.10:45738.service: Deactivated successfully. Jan 23 17:28:00.513249 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 17:28:00.514760 systemd-logind[2081]: Session 8 logged out. Waiting for processes to exit. Jan 23 17:28:00.515610 systemd-logind[2081]: Removed session 8. Jan 23 17:28:00.593422 systemd[1]: Started sshd@5-10.200.20.34:22-10.200.16.10:45740.service - OpenSSH per-connection server daemon (10.200.16.10:45740). Jan 23 17:28:00.986245 sshd[2566]: Accepted publickey for core from 10.200.16.10 port 45740 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:28:00.987409 sshd-session[2566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:28:00.991227 systemd-logind[2081]: New session 9 of user core. Jan 23 17:28:00.999678 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 23 17:28:01.134689 sudo[2572]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 17:28:01.134912 sudo[2572]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:28:01.143262 sudo[2572]: pam_unix(sudo:session): session closed for user root Jan 23 17:28:01.148776 sudo[2571]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 17:28:01.148988 sudo[2571]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:28:01.155402 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 17:28:01.184000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 23 17:28:01.185467 augenrules[2596]: No rules Jan 23 17:28:01.187825 kernel: kauditd_printk_skb: 96 callbacks suppressed Jan 23 17:28:01.187900 kernel: audit: type=1305 audit(1769189281.184:256): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 23 17:28:01.189623 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 17:28:01.192316 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 23 17:28:01.196115 sudo[2571]: pam_unix(sudo:session): session closed for user root Jan 23 17:28:01.184000 audit[2596]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff2c3ebb0 a2=420 a3=0 items=0 ppid=2577 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:01.214086 kernel: audit: type=1300 audit(1769189281.184:256): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff2c3ebb0 a2=420 a3=0 items=0 ppid=2577 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:01.214200 kernel: audit: type=1327 audit(1769189281.184:256): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 17:28:01.184000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 17:28:01.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:28:01.233610 kernel: audit: type=1130 audit(1769189281.187:257): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:28:01.233648 kernel: audit: type=1131 audit(1769189281.187:258): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:28:01.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:28:01.195000 audit[2571]: USER_END pid=2571 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:28:01.258573 kernel: audit: type=1106 audit(1769189281.195:259): pid=2571 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:28:01.195000 audit[2571]: CRED_DISP pid=2571 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:28:01.271027 kernel: audit: type=1104 audit(1769189281.195:260): pid=2571 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:28:01.271113 sshd[2570]: Connection closed by 10.200.16.10 port 45740 Jan 23 17:28:01.271207 sshd-session[2566]: pam_unix(sshd:session): session closed for user core Jan 23 17:28:01.272000 audit[2566]: USER_END pid=2566 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:28:01.281022 systemd[1]: sshd@5-10.200.20.34:22-10.200.16.10:45740.service: Deactivated successfully. 
Jan 23 17:28:01.283557 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 17:28:01.277000 audit[2566]: CRED_DISP pid=2566 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:28:01.304208 kernel: audit: type=1106 audit(1769189281.272:261): pid=2566 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:28:01.304327 kernel: audit: type=1104 audit(1769189281.277:262): pid=2566 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:28:01.304288 systemd-logind[2081]: Session 9 logged out. Waiting for processes to exit. Jan 23 17:28:01.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.34:22-10.200.16.10:45740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:28:01.317587 kernel: audit: type=1131 audit(1769189281.277:263): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.34:22-10.200.16.10:45740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:28:01.318789 systemd-logind[2081]: Removed session 9. Jan 23 17:28:01.361244 systemd[1]: Started sshd@6-10.200.20.34:22-10.200.16.10:45742.service - OpenSSH per-connection server daemon (10.200.16.10:45742). 
Jan 23 17:28:01.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.34:22-10.200.16.10:45742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:28:01.782000 audit[2605]: USER_ACCT pid=2605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:28:01.783067 sshd[2605]: Accepted publickey for core from 10.200.16.10 port 45742 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:28:01.783000 audit[2605]: CRED_ACQ pid=2605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:28:01.784000 audit[2605]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd1722ee0 a2=3 a3=0 items=0 ppid=1 pid=2605 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:01.784000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:28:01.784655 sshd-session[2605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:28:01.788508 systemd-logind[2081]: New session 10 of user core. Jan 23 17:28:01.795664 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 23 17:28:01.797000 audit[2605]: USER_START pid=2605 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:28:01.799000 audit[2609]: CRED_ACQ pid=2609 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:28:01.942000 audit[2610]: USER_ACCT pid=2610 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:28:01.943221 sudo[2610]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 17:28:01.943000 audit[2610]: CRED_REFR pid=2610 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:28:01.943000 audit[2610]: USER_START pid=2610 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:28:01.943967 sudo[2610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:28:03.179537 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 23 17:28:03.189756 (dockerd)[2630]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 17:28:04.069299 dockerd[2630]: time="2026-01-23T17:28:04.068865895Z" level=info msg="Starting up" Jan 23 17:28:04.069768 dockerd[2630]: time="2026-01-23T17:28:04.069708439Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 17:28:04.078811 dockerd[2630]: time="2026-01-23T17:28:04.078767479Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 17:28:04.162556 dockerd[2630]: time="2026-01-23T17:28:04.162504039Z" level=info msg="Loading containers: start." Jan 23 17:28:04.193295 kernel: Initializing XFRM netlink socket Jan 23 17:28:04.257000 audit[2677]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2677 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.257000 audit[2677]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffcc5ec560 a2=0 a3=0 items=0 ppid=2630 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.257000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 17:28:04.259000 audit[2679]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2679 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.259000 audit[2679]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffce97e380 a2=0 a3=0 items=0 ppid=2630 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
17:28:04.259000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 17:28:04.260000 audit[2681]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2681 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.260000 audit[2681]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff92c3100 a2=0 a3=0 items=0 ppid=2630 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.260000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 17:28:04.262000 audit[2683]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2683 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.262000 audit[2683]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdd790800 a2=0 a3=0 items=0 ppid=2630 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.262000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 23 17:28:04.264000 audit[2685]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=2685 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.264000 audit[2685]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc5997e80 a2=0 a3=0 items=0 ppid=2630 pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.264000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 23 17:28:04.266000 audit[2687]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=2687 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.266000 audit[2687]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe7b56250 a2=0 a3=0 items=0 ppid=2630 pid=2687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.266000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 17:28:04.268000 audit[2689]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2689 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.268000 audit[2689]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff78f2d10 a2=0 a3=0 items=0 ppid=2630 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.268000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 17:28:04.270000 audit[2691]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=2691 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.270000 audit[2691]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffc6b00f00 a2=0 a3=0 items=0 ppid=2630 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
17:28:04.270000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 23 17:28:04.327000 audit[2694]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2694 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.327000 audit[2694]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffe0d71420 a2=0 a3=0 items=0 ppid=2630 pid=2694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.327000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 23 17:28:04.328000 audit[2696]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=2696 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.328000 audit[2696]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc5528f20 a2=0 a3=0 items=0 ppid=2630 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.328000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 23 17:28:04.330000 audit[2698]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2698 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.330000 audit[2698]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffd1d1dde0 a2=0 a3=0 items=0 ppid=2630 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.330000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 23 17:28:04.332000 audit[2700]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2700 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.332000 audit[2700]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd24586b0 a2=0 a3=0 items=0 ppid=2630 pid=2700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.332000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 17:28:04.333000 audit[2702]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=2702 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.333000 audit[2702]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffffd5870a0 a2=0 a3=0 items=0 ppid=2630 pid=2702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.333000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 23 17:28:04.448000 audit[2732]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=2732 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:04.448000 audit[2732]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe94f7f80 a2=0 a3=0 items=0 ppid=2630 pid=2732 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.448000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 17:28:04.450000 audit[2734]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=2734 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:04.450000 audit[2734]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffdfb78980 a2=0 a3=0 items=0 ppid=2630 pid=2734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.450000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 17:28:04.452000 audit[2736]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2736 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:04.452000 audit[2736]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffdb8a720 a2=0 a3=0 items=0 ppid=2630 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.452000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 17:28:04.454000 audit[2738]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2738 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:04.454000 audit[2738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd88a5340 a2=0 a3=0 items=0 ppid=2630 pid=2738 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.454000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 23 17:28:04.455000 audit[2740]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=2740 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:04.455000 audit[2740]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffffd32b0 a2=0 a3=0 items=0 ppid=2630 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.455000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 23 17:28:04.457000 audit[2742]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=2742 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:04.457000 audit[2742]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffeaf6a00 a2=0 a3=0 items=0 ppid=2630 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.457000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 17:28:04.459000 audit[2744]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=2744 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:04.459000 audit[2744]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe1ab10d0 a2=0 a3=0 items=0 ppid=2630 pid=2744 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.459000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 17:28:04.461000 audit[2746]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=2746 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:04.461000 audit[2746]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffffc4be380 a2=0 a3=0 items=0 ppid=2630 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.461000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 23 17:28:04.463000 audit[2748]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=2748 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:04.463000 audit[2748]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffd2121d40 a2=0 a3=0 items=0 ppid=2630 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.463000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 23 17:28:04.465000 audit[2750]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=2750 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:04.465000 audit[2750]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffcb8850a0 a2=0 a3=0 items=0 ppid=2630 pid=2750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.465000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 23 17:28:04.466000 audit[2752]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=2752 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:04.466000 audit[2752]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffdd25cfd0 a2=0 a3=0 items=0 ppid=2630 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.466000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 23 17:28:04.468000 audit[2754]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=2754 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:04.468000 audit[2754]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc807cf60 a2=0 a3=0 items=0 ppid=2630 pid=2754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.468000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 17:28:04.470000 audit[2756]: NETFILTER_CFG 
table=filter:30 family=10 entries=1 op=nft_register_rule pid=2756 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:04.470000 audit[2756]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffcf764bd0 a2=0 a3=0 items=0 ppid=2630 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.470000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 23 17:28:04.474000 audit[2761]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=2761 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.474000 audit[2761]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd9ddbb90 a2=0 a3=0 items=0 ppid=2630 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.474000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 23 17:28:04.476000 audit[2763]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2763 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.476000 audit[2763]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffffa2c3810 a2=0 a3=0 items=0 ppid=2630 pid=2763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.476000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 23 17:28:04.477000 audit[2765]: NETFILTER_CFG 
table=filter:33 family=2 entries=1 op=nft_register_rule pid=2765 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.477000 audit[2765]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffdadb7b40 a2=0 a3=0 items=0 ppid=2630 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.477000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 23 17:28:04.479000 audit[2767]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2767 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:04.479000 audit[2767]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdb60ab30 a2=0 a3=0 items=0 ppid=2630 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.479000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 23 17:28:04.481000 audit[2769]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=2769 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:04.481000 audit[2769]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc8c5f450 a2=0 a3=0 items=0 ppid=2630 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.481000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 23 17:28:04.483000 audit[2771]: NETFILTER_CFG table=filter:36 
family=10 entries=1 op=nft_register_rule pid=2771 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:04.483000 audit[2771]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe4f17130 a2=0 a3=0 items=0 ppid=2630 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.483000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 23 17:28:04.550000 audit[2776]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=2776 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.550000 audit[2776]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffe3c001e0 a2=0 a3=0 items=0 ppid=2630 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.550000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 23 17:28:04.552000 audit[2778]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=2778 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.552000 audit[2778]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffcac3e610 a2=0 a3=0 items=0 ppid=2630 pid=2778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.552000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 23 17:28:04.560000 audit[2786]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2786 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.560000 audit[2786]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffe9aa3880 a2=0 a3=0 items=0 ppid=2630 pid=2786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.560000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 23 17:28:04.564000 audit[2791]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2791 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.564000 audit[2791]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffc41d61c0 a2=0 a3=0 items=0 ppid=2630 pid=2791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.564000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 23 17:28:04.566000 audit[2793]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2793 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.566000 audit[2793]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=fffff3852840 a2=0 a3=0 items=0 ppid=2630 pid=2793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.566000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 23 17:28:04.568000 audit[2795]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2795 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.568000 audit[2795]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdf303160 a2=0 a3=0 items=0 ppid=2630 pid=2795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.568000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 23 17:28:04.570000 audit[2797]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=2797 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:04.570000 audit[2797]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffcd519330 a2=0 a3=0 items=0 ppid=2630 pid=2797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.570000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 17:28:04.571000 audit[2799]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=2799 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jan 23 17:28:04.571000 audit[2799]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd8fd2780 a2=0 a3=0 items=0 ppid=2630 pid=2799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:04.571000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 23 17:28:04.573017 systemd-networkd[1692]: docker0: Link UP Jan 23 17:28:04.592711 dockerd[2630]: time="2026-01-23T17:28:04.592588591Z" level=info msg="Loading containers: done." Jan 23 17:28:04.603960 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4064918203-merged.mount: Deactivated successfully. Jan 23 17:28:04.669077 dockerd[2630]: time="2026-01-23T17:28:04.668707959Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 17:28:04.669077 dockerd[2630]: time="2026-01-23T17:28:04.668804703Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 17:28:04.669077 dockerd[2630]: time="2026-01-23T17:28:04.668920311Z" level=info msg="Initializing buildkit" Jan 23 17:28:04.720755 dockerd[2630]: time="2026-01-23T17:28:04.720707367Z" level=info msg="Completed buildkit initialization" Jan 23 17:28:04.725510 dockerd[2630]: time="2026-01-23T17:28:04.725451455Z" level=info msg="Daemon has completed initialization" Jan 23 17:28:04.726237 dockerd[2630]: time="2026-01-23T17:28:04.725514447Z" level=info msg="API listen on /run/docker.sock" Jan 23 17:28:04.725878 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 23 17:28:04.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:28:05.587043 containerd[2108]: time="2026-01-23T17:28:05.587005375Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 17:28:06.734576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2383933154.mount: Deactivated successfully. Jan 23 17:28:07.751922 containerd[2108]: time="2026-01-23T17:28:07.751858136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:07.758521 containerd[2108]: time="2026-01-23T17:28:07.758462725Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=25624485" Jan 23 17:28:07.763258 containerd[2108]: time="2026-01-23T17:28:07.763074337Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:07.768276 containerd[2108]: time="2026-01-23T17:28:07.768229419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:07.768980 containerd[2108]: time="2026-01-23T17:28:07.768831813Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.181788701s" Jan 23 17:28:07.768980 containerd[2108]: time="2026-01-23T17:28:07.768866351Z" 
level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 23 17:28:07.769523 containerd[2108]: time="2026-01-23T17:28:07.769496562Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 23 17:28:08.209740 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 17:28:08.211049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:28:08.321753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:28:08.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:28:08.324957 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 23 17:28:08.325050 kernel: audit: type=1130 audit(1769189288.320:314): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:28:08.347923 (kubelet)[2905]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:28:08.461465 kubelet[2905]: E0123 17:28:08.461345 2905 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:28:08.463441 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:28:08.463669 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 23 17:28:08.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 17:28:08.464491 systemd[1]: kubelet.service: Consumed 115ms CPU time, 105.3M memory peak. Jan 23 17:28:08.477322 kernel: audit: type=1131 audit(1769189288.462:315): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 17:28:09.620338 containerd[2108]: time="2026-01-23T17:28:09.620266150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:09.624006 containerd[2108]: time="2026-01-23T17:28:09.623775015Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22613932" Jan 23 17:28:09.627997 containerd[2108]: time="2026-01-23T17:28:09.627966294Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:09.633880 containerd[2108]: time="2026-01-23T17:28:09.633831033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:09.634593 containerd[2108]: time="2026-01-23T17:28:09.634561443Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.865037446s" Jan 23 17:28:09.634688 containerd[2108]: time="2026-01-23T17:28:09.634675091Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 23 17:28:09.635263 containerd[2108]: time="2026-01-23T17:28:09.635245058Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 17:28:10.892311 containerd[2108]: time="2026-01-23T17:28:10.892119902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:10.897587 containerd[2108]: time="2026-01-23T17:28:10.897529649Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17608611" Jan 23 17:28:10.901963 containerd[2108]: time="2026-01-23T17:28:10.901926127Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:10.907297 containerd[2108]: time="2026-01-23T17:28:10.907241284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:10.907958 containerd[2108]: time="2026-01-23T17:28:10.907648976Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.272282965s" Jan 23 17:28:10.907958 
containerd[2108]: time="2026-01-23T17:28:10.907675922Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 23 17:28:10.908759 containerd[2108]: time="2026-01-23T17:28:10.908726730Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 17:28:12.230981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3696349212.mount: Deactivated successfully. Jan 23 17:28:12.516347 containerd[2108]: time="2026-01-23T17:28:12.515784708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:12.523288 containerd[2108]: time="2026-01-23T17:28:12.523220722Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=17714360" Jan 23 17:28:12.527096 containerd[2108]: time="2026-01-23T17:28:12.527052129Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:12.533069 containerd[2108]: time="2026-01-23T17:28:12.532432299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:12.533069 containerd[2108]: time="2026-01-23T17:28:12.532672251Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.623908063s" Jan 23 17:28:12.533069 containerd[2108]: time="2026-01-23T17:28:12.532699701Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 23 17:28:12.533367 containerd[2108]: time="2026-01-23T17:28:12.533346698Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 17:28:13.297348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4172512925.mount: Deactivated successfully. Jan 23 17:28:14.247300 containerd[2108]: time="2026-01-23T17:28:14.247079822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:14.251613 containerd[2108]: time="2026-01-23T17:28:14.251559602Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=15956357" Jan 23 17:28:14.255266 containerd[2108]: time="2026-01-23T17:28:14.255219445Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:14.261301 containerd[2108]: time="2026-01-23T17:28:14.260934325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:14.261454 containerd[2108]: time="2026-01-23T17:28:14.261431880Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.727947309s" Jan 23 17:28:14.261522 containerd[2108]: time="2026-01-23T17:28:14.261508821Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image 
reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 23 17:28:14.262237 containerd[2108]: time="2026-01-23T17:28:14.262217590Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 17:28:14.860708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount338378613.mount: Deactivated successfully. Jan 23 17:28:14.888651 containerd[2108]: time="2026-01-23T17:28:14.888598005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:28:14.894121 containerd[2108]: time="2026-01-23T17:28:14.894066770Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 17:28:14.898591 containerd[2108]: time="2026-01-23T17:28:14.898537386Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:28:14.904347 containerd[2108]: time="2026-01-23T17:28:14.904292091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:28:14.905023 containerd[2108]: time="2026-01-23T17:28:14.904696607Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 642.366953ms" Jan 23 17:28:14.905023 containerd[2108]: time="2026-01-23T17:28:14.904727593Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns 
image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 17:28:14.905303 containerd[2108]: time="2026-01-23T17:28:14.905261214Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 17:28:15.682812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2027432086.mount: Deactivated successfully. Jan 23 17:28:15.773607 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 23 17:28:17.667402 containerd[2108]: time="2026-01-23T17:28:17.667343217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:17.671872 containerd[2108]: time="2026-01-23T17:28:17.671816393Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=56574427" Jan 23 17:28:17.675017 containerd[2108]: time="2026-01-23T17:28:17.674984550Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:17.680238 containerd[2108]: time="2026-01-23T17:28:17.680180552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:17.680808 containerd[2108]: time="2026-01-23T17:28:17.680783042Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.775369089s" Jan 23 17:28:17.680808 containerd[2108]: time="2026-01-23T17:28:17.680812348Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference 
\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 23 17:28:18.493917 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 17:28:18.495632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:28:18.622532 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:28:18.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:28:18.639297 kernel: audit: type=1130 audit(1769189298.621:316): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:28:18.647617 (kubelet)[3066]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:28:18.717843 kubelet[3066]: E0123 17:28:18.717769 3066 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:28:18.721565 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:28:18.721803 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:28:18.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 17:28:18.722308 systemd[1]: kubelet.service: Consumed 113ms CPU time, 105.4M memory peak. 
Jan 23 17:28:18.735295 kernel: audit: type=1131 audit(1769189298.721:317): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 17:28:20.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:28:20.366941 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:28:20.367077 systemd[1]: kubelet.service: Consumed 113ms CPU time, 105.4M memory peak. Jan 23 17:28:20.375556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:28:20.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:28:20.387803 update_engine[2086]: I20260123 17:28:20.387322 2086 update_attempter.cc:509] Updating boot flags... Jan 23 17:28:20.393400 kernel: audit: type=1130 audit(1769189300.365:318): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:28:20.393467 kernel: audit: type=1131 audit(1769189300.365:319): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:28:20.419776 systemd[1]: Reload requested from client PID 3087 ('systemctl') (unit session-10.scope)... Jan 23 17:28:20.419785 systemd[1]: Reloading... Jan 23 17:28:20.548301 zram_generator::config[3159]: No configuration found. Jan 23 17:28:20.711467 systemd[1]: Reloading finished in 291 ms. 
Jan 23 17:28:20.739000 audit: BPF prog-id=87 op=LOAD Jan 23 17:28:20.739000 audit: BPF prog-id=73 op=UNLOAD Jan 23 17:28:20.749765 kernel: audit: type=1334 audit(1769189300.739:320): prog-id=87 op=LOAD Jan 23 17:28:20.749829 kernel: audit: type=1334 audit(1769189300.739:321): prog-id=73 op=UNLOAD Jan 23 17:28:20.739000 audit: BPF prog-id=88 op=LOAD Jan 23 17:28:20.760539 kernel: audit: type=1334 audit(1769189300.739:322): prog-id=88 op=LOAD Jan 23 17:28:20.739000 audit: BPF prog-id=89 op=LOAD Jan 23 17:28:20.765122 kernel: audit: type=1334 audit(1769189300.739:323): prog-id=89 op=LOAD Jan 23 17:28:20.739000 audit: BPF prog-id=74 op=UNLOAD Jan 23 17:28:20.773111 kernel: audit: type=1334 audit(1769189300.739:324): prog-id=74 op=UNLOAD Jan 23 17:28:20.780386 kernel: audit: type=1334 audit(1769189300.739:325): prog-id=75 op=UNLOAD Jan 23 17:28:20.739000 audit: BPF prog-id=75 op=UNLOAD Jan 23 17:28:20.739000 audit: BPF prog-id=90 op=LOAD Jan 23 17:28:20.739000 audit: BPF prog-id=91 op=LOAD Jan 23 17:28:20.739000 audit: BPF prog-id=85 op=UNLOAD Jan 23 17:28:20.739000 audit: BPF prog-id=86 op=UNLOAD Jan 23 17:28:20.744000 audit: BPF prog-id=92 op=LOAD Jan 23 17:28:20.744000 audit: BPF prog-id=84 op=UNLOAD Jan 23 17:28:20.764000 audit: BPF prog-id=93 op=LOAD Jan 23 17:28:20.764000 audit: BPF prog-id=77 op=UNLOAD Jan 23 17:28:20.765000 audit: BPF prog-id=94 op=LOAD Jan 23 17:28:20.765000 audit: BPF prog-id=95 op=LOAD Jan 23 17:28:20.765000 audit: BPF prog-id=78 op=UNLOAD Jan 23 17:28:20.765000 audit: BPF prog-id=79 op=UNLOAD Jan 23 17:28:20.765000 audit: BPF prog-id=96 op=LOAD Jan 23 17:28:20.765000 audit: BPF prog-id=67 op=UNLOAD Jan 23 17:28:20.765000 audit: BPF prog-id=97 op=LOAD Jan 23 17:28:20.765000 audit: BPF prog-id=98 op=LOAD Jan 23 17:28:20.765000 audit: BPF prog-id=68 op=UNLOAD Jan 23 17:28:20.765000 audit: BPF prog-id=69 op=UNLOAD Jan 23 17:28:20.766000 audit: BPF prog-id=99 op=LOAD Jan 23 17:28:20.766000 audit: BPF prog-id=70 op=UNLOAD Jan 23 17:28:20.766000 
audit: BPF prog-id=100 op=LOAD Jan 23 17:28:20.766000 audit: BPF prog-id=101 op=LOAD Jan 23 17:28:20.766000 audit: BPF prog-id=71 op=UNLOAD Jan 23 17:28:20.766000 audit: BPF prog-id=72 op=UNLOAD Jan 23 17:28:20.766000 audit: BPF prog-id=102 op=LOAD Jan 23 17:28:20.766000 audit: BPF prog-id=83 op=UNLOAD Jan 23 17:28:20.767000 audit: BPF prog-id=103 op=LOAD Jan 23 17:28:20.767000 audit: BPF prog-id=80 op=UNLOAD Jan 23 17:28:20.767000 audit: BPF prog-id=104 op=LOAD Jan 23 17:28:20.767000 audit: BPF prog-id=105 op=LOAD Jan 23 17:28:20.767000 audit: BPF prog-id=81 op=UNLOAD Jan 23 17:28:20.767000 audit: BPF prog-id=82 op=UNLOAD Jan 23 17:28:20.767000 audit: BPF prog-id=106 op=LOAD Jan 23 17:28:20.767000 audit: BPF prog-id=76 op=UNLOAD Jan 23 17:28:20.915240 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 17:28:20.915330 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 17:28:20.915586 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:28:20.915638 systemd[1]: kubelet.service: Consumed 78ms CPU time, 95.1M memory peak. Jan 23 17:28:20.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 17:28:20.924560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:28:21.379169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:28:21.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:28:21.389645 (kubelet)[3259]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:28:21.418807 kubelet[3259]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:28:21.418807 kubelet[3259]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 17:28:21.418807 kubelet[3259]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:28:21.419180 kubelet[3259]: I0123 17:28:21.418842 3259 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:28:21.634027 kubelet[3259]: I0123 17:28:21.633907 3259 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 17:28:21.634027 kubelet[3259]: I0123 17:28:21.633941 3259 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:28:21.634604 kubelet[3259]: I0123 17:28:21.634578 3259 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 17:28:21.658986 kubelet[3259]: E0123 17:28:21.658934 3259 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:28:21.659876 kubelet[3259]: 
I0123 17:28:21.659765 3259 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:28:21.665458 kubelet[3259]: I0123 17:28:21.665435 3259 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:28:21.668408 kubelet[3259]: I0123 17:28:21.668324 3259 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 17:28:21.669222 kubelet[3259]: I0123 17:28:21.669181 3259 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 17:28:21.669473 kubelet[3259]: I0123 17:28:21.669326 3259 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.1.0-a-f00ee6181d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"
TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 17:28:21.669621 kubelet[3259]: I0123 17:28:21.669608 3259 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 17:28:21.669668 kubelet[3259]: I0123 17:28:21.669662 3259 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 17:28:21.669856 kubelet[3259]: I0123 17:28:21.669842 3259 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:28:21.672171 kubelet[3259]: I0123 17:28:21.672146 3259 kubelet.go:446] "Attempting to sync node with API server" Jan 23 17:28:21.672296 kubelet[3259]: I0123 17:28:21.672283 3259 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 17:28:21.672369 kubelet[3259]: I0123 17:28:21.672360 3259 kubelet.go:352] "Adding apiserver pod source" Jan 23 17:28:21.672421 kubelet[3259]: I0123 17:28:21.672412 3259 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 17:28:21.676117 kubelet[3259]: W0123 17:28:21.676020 3259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.1.0-a-f00ee6181d&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jan 23 17:28:21.676189 kubelet[3259]: E0123 17:28:21.676132 3259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.1.0-a-f00ee6181d&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:28:21.676599 
kubelet[3259]: W0123 17:28:21.676554 3259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jan 23 17:28:21.676599 kubelet[3259]: E0123 17:28:21.676590 3259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:28:21.676706 kubelet[3259]: I0123 17:28:21.676685 3259 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 17:28:21.677245 kubelet[3259]: I0123 17:28:21.677017 3259 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 17:28:21.677245 kubelet[3259]: W0123 17:28:21.677066 3259 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 23 17:28:21.681263 kubelet[3259]: I0123 17:28:21.681223 3259 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 17:28:21.681812 kubelet[3259]: I0123 17:28:21.681786 3259 server.go:1287] "Started kubelet" Jan 23 17:28:21.683301 kubelet[3259]: I0123 17:28:21.682584 3259 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 17:28:21.683970 kubelet[3259]: I0123 17:28:21.683950 3259 server.go:479] "Adding debug handlers to kubelet server" Jan 23 17:28:21.686008 kubelet[3259]: I0123 17:28:21.685943 3259 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:28:21.686323 kubelet[3259]: I0123 17:28:21.686306 3259 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:28:21.686673 kubelet[3259]: E0123 17:28:21.686563 3259 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.34:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547.1.0-a-f00ee6181d.188d6c4bdb29aebb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.1.0-a-f00ee6181d,UID:ci-4547.1.0-a-f00ee6181d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.1.0-a-f00ee6181d,},FirstTimestamp:2026-01-23 17:28:21.681761979 +0000 UTC m=+0.288959584,LastTimestamp:2026-01-23 17:28:21.681761979 +0000 UTC m=+0.288959584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.1.0-a-f00ee6181d,}" Jan 23 17:28:21.687884 kubelet[3259]: I0123 17:28:21.687791 3259 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 17:28:21.689920 kubelet[3259]: I0123 17:28:21.689899 3259 dynamic_serving_content.go:135] 
"Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:28:21.689000 audit[3271]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3271 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:21.689000 audit[3271]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdd95bd80 a2=0 a3=0 items=0 ppid=3259 pid=3271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:21.689000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 17:28:21.690000 audit[3272]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3272 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:21.690000 audit[3272]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc6bf1f0 a2=0 a3=0 items=0 ppid=3259 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:21.690000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 23 17:28:21.693236 kubelet[3259]: I0123 17:28:21.693137 3259 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 17:28:21.693236 kubelet[3259]: E0123 17:28:21.693174 3259 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:28:21.693575 kubelet[3259]: E0123 17:28:21.693408 3259 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.1.0-a-f00ee6181d\" not found" Jan 23 17:28:21.693000 audit[3274]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3274 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:21.693000 audit[3274]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffcd6583e0 a2=0 a3=0 items=0 ppid=3259 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:21.693000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 17:28:21.695301 kubelet[3259]: I0123 17:28:21.694705 3259 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 17:28:21.695301 kubelet[3259]: I0123 17:28:21.694758 3259 reconciler.go:26] "Reconciler: start to sync state" Jan 23 17:28:21.695301 kubelet[3259]: W0123 17:28:21.695019 3259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jan 23 17:28:21.695301 kubelet[3259]: E0123 17:28:21.695055 3259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:28:21.695301 kubelet[3259]: E0123 17:28:21.695096 3259 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.1.0-a-f00ee6181d?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="200ms" Jan 23 17:28:21.695558 kubelet[3259]: I0123 17:28:21.695537 3259 factory.go:221] Registration of the systemd container factory successfully Jan 23 17:28:21.695701 kubelet[3259]: I0123 17:28:21.695685 3259 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 17:28:21.696768 kubelet[3259]: I0123 17:28:21.696752 3259 factory.go:221] Registration of the containerd container factory successfully Jan 23 17:28:21.695000 audit[3276]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3276 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:21.695000 audit[3276]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe291bf80 a2=0 a3=0 items=0 ppid=3259 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:21.695000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 17:28:21.702000 audit[3279]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3279 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:21.702000 audit[3279]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff23f7b70 a2=0 a3=0 items=0 ppid=3259 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 23 17:28:21.702000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 23 17:28:21.704228 kubelet[3259]: I0123 17:28:21.704175 3259 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 17:28:21.703000 audit[3280]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3280 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:21.703000 audit[3280]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff67b23c0 a2=0 a3=0 items=0 ppid=3259 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:21.703000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 17:28:21.705188 kubelet[3259]: I0123 17:28:21.705164 3259 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 17:28:21.705188 kubelet[3259]: I0123 17:28:21.705187 3259 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 17:28:21.705233 kubelet[3259]: I0123 17:28:21.705209 3259 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 17:28:21.705233 kubelet[3259]: I0123 17:28:21.705215 3259 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 17:28:21.705263 kubelet[3259]: E0123 17:28:21.705252 3259 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:28:21.704000 audit[3281]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3281 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:21.704000 audit[3281]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdc378c90 a2=0 a3=0 items=0 ppid=3259 pid=3281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:21.704000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 23 17:28:21.705000 audit[3282]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=3282 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:21.705000 audit[3282]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc1cf5650 a2=0 a3=0 items=0 ppid=3259 pid=3282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:21.705000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 23 17:28:21.706000 audit[3283]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=3283 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:21.706000 audit[3283]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc7f15930 a2=0 a3=0 items=0 ppid=3259 pid=3283 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:21.706000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 23 17:28:21.707000 audit[3284]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3284 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:21.707000 audit[3284]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcb90c5b0 a2=0 a3=0 items=0 ppid=3259 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:21.707000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 23 17:28:21.708000 audit[3285]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3285 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:21.708000 audit[3285]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff4b141f0 a2=0 a3=0 items=0 ppid=3259 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:21.708000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 23 17:28:21.709000 audit[3286]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3286 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:21.709000 audit[3286]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffea2bc590 a2=0 a3=0 items=0 
ppid=3259 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:21.709000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 23 17:28:21.715104 kubelet[3259]: W0123 17:28:21.715008 3259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Jan 23 17:28:21.715104 kubelet[3259]: E0123 17:28:21.715063 3259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:28:21.719168 kubelet[3259]: I0123 17:28:21.718899 3259 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 17:28:21.719168 kubelet[3259]: I0123 17:28:21.718915 3259 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:28:21.719168 kubelet[3259]: I0123 17:28:21.718937 3259 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:28:21.726701 kubelet[3259]: I0123 17:28:21.726395 3259 policy_none.go:49] "None policy: Start" Jan 23 17:28:21.726701 kubelet[3259]: I0123 17:28:21.726426 3259 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 17:28:21.726701 kubelet[3259]: I0123 17:28:21.726438 3259 state_mem.go:35] "Initializing new in-memory state store" Jan 23 17:28:21.736224 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 23 17:28:21.744053 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 17:28:21.747522 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 17:28:21.772344 kubelet[3259]: I0123 17:28:21.772108 3259 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 17:28:21.772344 kubelet[3259]: I0123 17:28:21.772345 3259 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:28:21.772493 kubelet[3259]: I0123 17:28:21.772357 3259 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 17:28:21.773401 kubelet[3259]: I0123 17:28:21.772796 3259 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:28:21.774968 kubelet[3259]: E0123 17:28:21.774906 3259 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 17:28:21.774968 kubelet[3259]: E0123 17:28:21.774945 3259 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547.1.0-a-f00ee6181d\" not found" Jan 23 17:28:21.816199 systemd[1]: Created slice kubepods-burstable-podd065ae7825ffe2854c2df3e8470da2d9.slice - libcontainer container kubepods-burstable-podd065ae7825ffe2854c2df3e8470da2d9.slice. Jan 23 17:28:21.832955 kubelet[3259]: E0123 17:28:21.832746 3259 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-f00ee6181d\" not found" node="ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:21.836225 systemd[1]: Created slice kubepods-burstable-pod0e33337b010e01b84ffe405e3ba1fe22.slice - libcontainer container kubepods-burstable-pod0e33337b010e01b84ffe405e3ba1fe22.slice. 
Jan 23 17:28:21.838204 kubelet[3259]: E0123 17:28:21.838144 3259 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-f00ee6181d\" not found" node="ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:21.839959 systemd[1]: Created slice kubepods-burstable-podf4ec7c180e87c215fda2c89db5d47124.slice - libcontainer container kubepods-burstable-podf4ec7c180e87c215fda2c89db5d47124.slice. Jan 23 17:28:21.841675 kubelet[3259]: E0123 17:28:21.841637 3259 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-f00ee6181d\" not found" node="ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:21.874955 kubelet[3259]: I0123 17:28:21.874557 3259 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:21.874955 kubelet[3259]: E0123 17:28:21.874916 3259 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:21.895891 kubelet[3259]: I0123 17:28:21.895755 3259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e33337b010e01b84ffe405e3ba1fe22-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.1.0-a-f00ee6181d\" (UID: \"0e33337b010e01b84ffe405e3ba1fe22\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:21.897191 kubelet[3259]: I0123 17:28:21.897136 3259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f4ec7c180e87c215fda2c89db5d47124-kubeconfig\") pod \"kube-controller-manager-ci-4547.1.0-a-f00ee6181d\" (UID: \"f4ec7c180e87c215fda2c89db5d47124\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-f00ee6181d" Jan 23 
17:28:21.897191 kubelet[3259]: E0123 17:28:21.896946 3259 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.1.0-a-f00ee6181d?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="400ms" Jan 23 17:28:21.897439 kubelet[3259]: I0123 17:28:21.897341 3259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4ec7c180e87c215fda2c89db5d47124-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.1.0-a-f00ee6181d\" (UID: \"f4ec7c180e87c215fda2c89db5d47124\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:21.897439 kubelet[3259]: I0123 17:28:21.897375 3259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d065ae7825ffe2854c2df3e8470da2d9-kubeconfig\") pod \"kube-scheduler-ci-4547.1.0-a-f00ee6181d\" (UID: \"d065ae7825ffe2854c2df3e8470da2d9\") " pod="kube-system/kube-scheduler-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:21.897439 kubelet[3259]: I0123 17:28:21.897413 3259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e33337b010e01b84ffe405e3ba1fe22-ca-certs\") pod \"kube-apiserver-ci-4547.1.0-a-f00ee6181d\" (UID: \"0e33337b010e01b84ffe405e3ba1fe22\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:21.897536 kubelet[3259]: I0123 17:28:21.897461 3259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e33337b010e01b84ffe405e3ba1fe22-k8s-certs\") pod \"kube-apiserver-ci-4547.1.0-a-f00ee6181d\" (UID: \"0e33337b010e01b84ffe405e3ba1fe22\") " 
pod="kube-system/kube-apiserver-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:21.897536 kubelet[3259]: I0123 17:28:21.897482 3259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4ec7c180e87c215fda2c89db5d47124-ca-certs\") pod \"kube-controller-manager-ci-4547.1.0-a-f00ee6181d\" (UID: \"f4ec7c180e87c215fda2c89db5d47124\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:21.897536 kubelet[3259]: I0123 17:28:21.897496 3259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f4ec7c180e87c215fda2c89db5d47124-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.1.0-a-f00ee6181d\" (UID: \"f4ec7c180e87c215fda2c89db5d47124\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:21.897536 kubelet[3259]: I0123 17:28:21.897509 3259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4ec7c180e87c215fda2c89db5d47124-k8s-certs\") pod \"kube-controller-manager-ci-4547.1.0-a-f00ee6181d\" (UID: \"f4ec7c180e87c215fda2c89db5d47124\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:22.077563 kubelet[3259]: I0123 17:28:22.077263 3259 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:22.077683 kubelet[3259]: E0123 17:28:22.077630 3259 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:22.134090 containerd[2108]: time="2026-01-23T17:28:22.134040033Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4547.1.0-a-f00ee6181d,Uid:d065ae7825ffe2854c2df3e8470da2d9,Namespace:kube-system,Attempt:0,}" Jan 23 17:28:22.140058 containerd[2108]: time="2026-01-23T17:28:22.140024258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.1.0-a-f00ee6181d,Uid:0e33337b010e01b84ffe405e3ba1fe22,Namespace:kube-system,Attempt:0,}" Jan 23 17:28:22.143136 containerd[2108]: time="2026-01-23T17:28:22.143105729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.1.0-a-f00ee6181d,Uid:f4ec7c180e87c215fda2c89db5d47124,Namespace:kube-system,Attempt:0,}" Jan 23 17:28:22.229296 containerd[2108]: time="2026-01-23T17:28:22.228605084Z" level=info msg="connecting to shim 065153537d7d61c55ec3e14a0d806540637c6436dca36e2569537280672f073f" address="unix:///run/containerd/s/4aa7dd32970ce59e600a39e80419b70d36b633c6f5cfb15532789b4be258be4c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:28:22.249490 systemd[1]: Started cri-containerd-065153537d7d61c55ec3e14a0d806540637c6436dca36e2569537280672f073f.scope - libcontainer container 065153537d7d61c55ec3e14a0d806540637c6436dca36e2569537280672f073f. 
Jan 23 17:28:22.258000 audit: BPF prog-id=107 op=LOAD Jan 23 17:28:22.259000 audit: BPF prog-id=108 op=LOAD Jan 23 17:28:22.259000 audit[3309]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3298 pid=3309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036353135333533376437643631633535656333653134613064383036 Jan 23 17:28:22.259000 audit: BPF prog-id=108 op=UNLOAD Jan 23 17:28:22.259000 audit[3309]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3298 pid=3309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036353135333533376437643631633535656333653134613064383036 Jan 23 17:28:22.259000 audit: BPF prog-id=109 op=LOAD Jan 23 17:28:22.259000 audit[3309]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3298 pid=3309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.259000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036353135333533376437643631633535656333653134613064383036 Jan 23 17:28:22.259000 audit: BPF prog-id=110 op=LOAD Jan 23 17:28:22.259000 audit[3309]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3298 pid=3309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036353135333533376437643631633535656333653134613064383036 Jan 23 17:28:22.259000 audit: BPF prog-id=110 op=UNLOAD Jan 23 17:28:22.259000 audit[3309]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3298 pid=3309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036353135333533376437643631633535656333653134613064383036 Jan 23 17:28:22.259000 audit: BPF prog-id=109 op=UNLOAD Jan 23 17:28:22.259000 audit[3309]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3298 pid=3309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
23 17:28:22.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036353135333533376437643631633535656333653134613064383036 Jan 23 17:28:22.259000 audit: BPF prog-id=111 op=LOAD Jan 23 17:28:22.259000 audit[3309]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3298 pid=3309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036353135333533376437643631633535656333653134613064383036 Jan 23 17:28:22.281053 containerd[2108]: time="2026-01-23T17:28:22.281014887Z" level=info msg="connecting to shim 202a56fde74353600ed7330d4361839dcd7da29b449945f2f09f390018b5f446" address="unix:///run/containerd/s/94c5379d97928ee537b2ef236551209975d18fbef6918dca7f4b37eca1f76d1f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:28:22.297437 containerd[2108]: time="2026-01-23T17:28:22.297385515Z" level=info msg="connecting to shim b7b8f2d55c58c7f4c6823048263d901a2c13201567ee7fa9135ae115ed6b8971" address="unix:///run/containerd/s/44537667967adbe100532b273fd0c15fff06e0a795658380d20c7220f6dda036" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:28:22.298227 kubelet[3259]: E0123 17:28:22.298192 3259 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.1.0-a-f00ee6181d?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="800ms" Jan 23 17:28:22.307182 
containerd[2108]: time="2026-01-23T17:28:22.307038059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.1.0-a-f00ee6181d,Uid:d065ae7825ffe2854c2df3e8470da2d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"065153537d7d61c55ec3e14a0d806540637c6436dca36e2569537280672f073f\"" Jan 23 17:28:22.309512 systemd[1]: Started cri-containerd-202a56fde74353600ed7330d4361839dcd7da29b449945f2f09f390018b5f446.scope - libcontainer container 202a56fde74353600ed7330d4361839dcd7da29b449945f2f09f390018b5f446. Jan 23 17:28:22.311650 containerd[2108]: time="2026-01-23T17:28:22.311623059Z" level=info msg="CreateContainer within sandbox \"065153537d7d61c55ec3e14a0d806540637c6436dca36e2569537280672f073f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 17:28:22.320000 audit: BPF prog-id=112 op=LOAD Jan 23 17:28:22.320000 audit: BPF prog-id=113 op=LOAD Jan 23 17:28:22.320000 audit[3352]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3336 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.320000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230326135366664653734333533363030656437333330643433363138 Jan 23 17:28:22.321000 audit: BPF prog-id=113 op=UNLOAD Jan 23 17:28:22.321000 audit[3352]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.321000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230326135366664653734333533363030656437333330643433363138 Jan 23 17:28:22.321000 audit: BPF prog-id=114 op=LOAD Jan 23 17:28:22.321000 audit[3352]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3336 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230326135366664653734333533363030656437333330643433363138 Jan 23 17:28:22.321000 audit: BPF prog-id=115 op=LOAD Jan 23 17:28:22.321000 audit[3352]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3336 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230326135366664653734333533363030656437333330643433363138 Jan 23 17:28:22.321000 audit: BPF prog-id=115 op=UNLOAD Jan 23 17:28:22.321000 audit[3352]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 17:28:22.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230326135366664653734333533363030656437333330643433363138 Jan 23 17:28:22.321000 audit: BPF prog-id=114 op=UNLOAD Jan 23 17:28:22.321000 audit[3352]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230326135366664653734333533363030656437333330643433363138 Jan 23 17:28:22.322000 audit: BPF prog-id=116 op=LOAD Jan 23 17:28:22.322000 audit[3352]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3336 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.322000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230326135366664653734333533363030656437333330643433363138 Jan 23 17:28:22.328454 systemd[1]: Started cri-containerd-b7b8f2d55c58c7f4c6823048263d901a2c13201567ee7fa9135ae115ed6b8971.scope - libcontainer container b7b8f2d55c58c7f4c6823048263d901a2c13201567ee7fa9135ae115ed6b8971. 
Jan 23 17:28:22.337000 audit: BPF prog-id=117 op=LOAD Jan 23 17:28:22.338000 audit: BPF prog-id=118 op=LOAD Jan 23 17:28:22.338000 audit[3386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3373 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.338000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237623866326435356335386337663463363832333034383236336439 Jan 23 17:28:22.339000 audit: BPF prog-id=118 op=UNLOAD Jan 23 17:28:22.339000 audit[3386]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3373 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237623866326435356335386337663463363832333034383236336439 Jan 23 17:28:22.339000 audit: BPF prog-id=119 op=LOAD Jan 23 17:28:22.339000 audit[3386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3373 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.339000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237623866326435356335386337663463363832333034383236336439 Jan 23 17:28:22.339000 audit: BPF prog-id=120 op=LOAD Jan 23 17:28:22.339000 audit[3386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3373 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237623866326435356335386337663463363832333034383236336439 Jan 23 17:28:22.339000 audit: BPF prog-id=120 op=UNLOAD Jan 23 17:28:22.339000 audit[3386]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3373 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237623866326435356335386337663463363832333034383236336439 Jan 23 17:28:22.339000 audit: BPF prog-id=119 op=UNLOAD Jan 23 17:28:22.339000 audit[3386]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3373 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
23 17:28:22.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237623866326435356335386337663463363832333034383236336439 Jan 23 17:28:22.339000 audit: BPF prog-id=121 op=LOAD Jan 23 17:28:22.339000 audit[3386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3373 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237623866326435356335386337663463363832333034383236336439 Jan 23 17:28:22.341116 containerd[2108]: time="2026-01-23T17:28:22.340786626Z" level=info msg="Container 455c99f17df0b6847c9bbfe20041b82fcb1016448f9234226f584afda9624f93: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:28:22.364822 containerd[2108]: time="2026-01-23T17:28:22.364719213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.1.0-a-f00ee6181d,Uid:f4ec7c180e87c215fda2c89db5d47124,Namespace:kube-system,Attempt:0,} returns sandbox id \"202a56fde74353600ed7330d4361839dcd7da29b449945f2f09f390018b5f446\"" Jan 23 17:28:22.364822 containerd[2108]: time="2026-01-23T17:28:22.364782266Z" level=info msg="CreateContainer within sandbox \"065153537d7d61c55ec3e14a0d806540637c6436dca36e2569537280672f073f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"455c99f17df0b6847c9bbfe20041b82fcb1016448f9234226f584afda9624f93\"" Jan 23 17:28:22.366588 containerd[2108]: time="2026-01-23T17:28:22.366531523Z" level=info msg="StartContainer 
for \"455c99f17df0b6847c9bbfe20041b82fcb1016448f9234226f584afda9624f93\"" Jan 23 17:28:22.367840 containerd[2108]: time="2026-01-23T17:28:22.367816373Z" level=info msg="CreateContainer within sandbox \"202a56fde74353600ed7330d4361839dcd7da29b449945f2f09f390018b5f446\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 17:28:22.368751 containerd[2108]: time="2026-01-23T17:28:22.368667424Z" level=info msg="connecting to shim 455c99f17df0b6847c9bbfe20041b82fcb1016448f9234226f584afda9624f93" address="unix:///run/containerd/s/4aa7dd32970ce59e600a39e80419b70d36b633c6f5cfb15532789b4be258be4c" protocol=ttrpc version=3 Jan 23 17:28:22.370930 containerd[2108]: time="2026-01-23T17:28:22.370898596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.1.0-a-f00ee6181d,Uid:0e33337b010e01b84ffe405e3ba1fe22,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7b8f2d55c58c7f4c6823048263d901a2c13201567ee7fa9135ae115ed6b8971\"" Jan 23 17:28:22.373465 containerd[2108]: time="2026-01-23T17:28:22.373433476Z" level=info msg="CreateContainer within sandbox \"b7b8f2d55c58c7f4c6823048263d901a2c13201567ee7fa9135ae115ed6b8971\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 17:28:22.385475 systemd[1]: Started cri-containerd-455c99f17df0b6847c9bbfe20041b82fcb1016448f9234226f584afda9624f93.scope - libcontainer container 455c99f17df0b6847c9bbfe20041b82fcb1016448f9234226f584afda9624f93. 
Jan 23 17:28:22.393000 audit: BPF prog-id=122 op=LOAD Jan 23 17:28:22.394000 audit: BPF prog-id=123 op=LOAD Jan 23 17:28:22.394000 audit[3425]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3298 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435356339396631376466306236383437633962626665323030343162 Jan 23 17:28:22.394000 audit: BPF prog-id=123 op=UNLOAD Jan 23 17:28:22.394000 audit[3425]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3298 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435356339396631376466306236383437633962626665323030343162 Jan 23 17:28:22.395000 audit: BPF prog-id=124 op=LOAD Jan 23 17:28:22.395000 audit[3425]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3298 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.395000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435356339396631376466306236383437633962626665323030343162 Jan 23 17:28:22.395000 audit: BPF prog-id=125 op=LOAD Jan 23 17:28:22.395000 audit[3425]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3298 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435356339396631376466306236383437633962626665323030343162 Jan 23 17:28:22.395000 audit: BPF prog-id=125 op=UNLOAD Jan 23 17:28:22.395000 audit[3425]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3298 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435356339396631376466306236383437633962626665323030343162 Jan 23 17:28:22.395000 audit: BPF prog-id=124 op=UNLOAD Jan 23 17:28:22.395000 audit[3425]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3298 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
23 17:28:22.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435356339396631376466306236383437633962626665323030343162 Jan 23 17:28:22.395000 audit: BPF prog-id=126 op=LOAD Jan 23 17:28:22.395000 audit[3425]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3298 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435356339396631376466306236383437633962626665323030343162 Jan 23 17:28:22.397881 containerd[2108]: time="2026-01-23T17:28:22.397814766Z" level=info msg="Container 0b13e1806af88037b3cd11648ea4b17f89116567c9bfe14d3df69f74dd9239aa: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:28:22.426741 containerd[2108]: time="2026-01-23T17:28:22.426251698Z" level=info msg="Container 7062370890b33d98dd6e3bf20ada27104e27dc449016d60f3b284de870672ffb: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:28:22.430310 containerd[2108]: time="2026-01-23T17:28:22.430225660Z" level=info msg="CreateContainer within sandbox \"202a56fde74353600ed7330d4361839dcd7da29b449945f2f09f390018b5f446\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0b13e1806af88037b3cd11648ea4b17f89116567c9bfe14d3df69f74dd9239aa\"" Jan 23 17:28:22.431408 containerd[2108]: time="2026-01-23T17:28:22.431382556Z" level=info msg="StartContainer for \"0b13e1806af88037b3cd11648ea4b17f89116567c9bfe14d3df69f74dd9239aa\"" Jan 23 17:28:22.432963 containerd[2108]: 
time="2026-01-23T17:28:22.432936223Z" level=info msg="StartContainer for \"455c99f17df0b6847c9bbfe20041b82fcb1016448f9234226f584afda9624f93\" returns successfully" Jan 23 17:28:22.433251 containerd[2108]: time="2026-01-23T17:28:22.433233598Z" level=info msg="connecting to shim 0b13e1806af88037b3cd11648ea4b17f89116567c9bfe14d3df69f74dd9239aa" address="unix:///run/containerd/s/94c5379d97928ee537b2ef236551209975d18fbef6918dca7f4b37eca1f76d1f" protocol=ttrpc version=3 Jan 23 17:28:22.455984 systemd[1]: Started cri-containerd-0b13e1806af88037b3cd11648ea4b17f89116567c9bfe14d3df69f74dd9239aa.scope - libcontainer container 0b13e1806af88037b3cd11648ea4b17f89116567c9bfe14d3df69f74dd9239aa. Jan 23 17:28:22.468000 audit: BPF prog-id=127 op=LOAD Jan 23 17:28:22.468000 audit: BPF prog-id=128 op=LOAD Jan 23 17:28:22.468000 audit[3457]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3336 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062313365313830366166383830333762336364313136343865613462 Jan 23 17:28:22.468000 audit: BPF prog-id=128 op=UNLOAD Jan 23 17:28:22.468000 audit[3457]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.468000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062313365313830366166383830333762336364313136343865613462 Jan 23 17:28:22.468000 audit: BPF prog-id=129 op=LOAD Jan 23 17:28:22.468000 audit[3457]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3336 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062313365313830366166383830333762336364313136343865613462 Jan 23 17:28:22.468000 audit: BPF prog-id=130 op=LOAD Jan 23 17:28:22.468000 audit[3457]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3336 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062313365313830366166383830333762336364313136343865613462 Jan 23 17:28:22.468000 audit: BPF prog-id=130 op=UNLOAD Jan 23 17:28:22.468000 audit[3457]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 17:28:22.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062313365313830366166383830333762336364313136343865613462 Jan 23 17:28:22.468000 audit: BPF prog-id=129 op=UNLOAD Jan 23 17:28:22.468000 audit[3457]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062313365313830366166383830333762336364313136343865613462 Jan 23 17:28:22.468000 audit: BPF prog-id=131 op=LOAD Jan 23 17:28:22.468000 audit[3457]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3336 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062313365313830366166383830333762336364313136343865613462 Jan 23 17:28:22.471710 containerd[2108]: time="2026-01-23T17:28:22.471673590Z" level=info msg="CreateContainer within sandbox \"b7b8f2d55c58c7f4c6823048263d901a2c13201567ee7fa9135ae115ed6b8971\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"7062370890b33d98dd6e3bf20ada27104e27dc449016d60f3b284de870672ffb\"" Jan 23 17:28:22.472598 containerd[2108]: time="2026-01-23T17:28:22.472577410Z" level=info msg="StartContainer for \"7062370890b33d98dd6e3bf20ada27104e27dc449016d60f3b284de870672ffb\"" Jan 23 17:28:22.474675 containerd[2108]: time="2026-01-23T17:28:22.474653111Z" level=info msg="connecting to shim 7062370890b33d98dd6e3bf20ada27104e27dc449016d60f3b284de870672ffb" address="unix:///run/containerd/s/44537667967adbe100532b273fd0c15fff06e0a795658380d20c7220f6dda036" protocol=ttrpc version=3 Jan 23 17:28:22.480202 kubelet[3259]: I0123 17:28:22.480113 3259 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:22.481680 kubelet[3259]: E0123 17:28:22.481520 3259 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:22.503453 systemd[1]: Started cri-containerd-7062370890b33d98dd6e3bf20ada27104e27dc449016d60f3b284de870672ffb.scope - libcontainer container 7062370890b33d98dd6e3bf20ada27104e27dc449016d60f3b284de870672ffb. 
Jan 23 17:28:22.513481 containerd[2108]: time="2026-01-23T17:28:22.513447977Z" level=info msg="StartContainer for \"0b13e1806af88037b3cd11648ea4b17f89116567c9bfe14d3df69f74dd9239aa\" returns successfully" Jan 23 17:28:22.522000 audit: BPF prog-id=132 op=LOAD Jan 23 17:28:22.524000 audit: BPF prog-id=133 op=LOAD Jan 23 17:28:22.524000 audit[3480]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=3373 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730363233373038393062333364393864643665336266323061646132 Jan 23 17:28:22.524000 audit: BPF prog-id=133 op=UNLOAD Jan 23 17:28:22.524000 audit[3480]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3373 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730363233373038393062333364393864643665336266323061646132 Jan 23 17:28:22.524000 audit: BPF prog-id=134 op=LOAD Jan 23 17:28:22.524000 audit[3480]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=3373 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
23 17:28:22.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730363233373038393062333364393864643665336266323061646132 Jan 23 17:28:22.524000 audit: BPF prog-id=135 op=LOAD Jan 23 17:28:22.524000 audit[3480]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=3373 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730363233373038393062333364393864643665336266323061646132 Jan 23 17:28:22.524000 audit: BPF prog-id=135 op=UNLOAD Jan 23 17:28:22.524000 audit[3480]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3373 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730363233373038393062333364393864643665336266323061646132 Jan 23 17:28:22.524000 audit: BPF prog-id=134 op=UNLOAD Jan 23 17:28:22.524000 audit[3480]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3373 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730363233373038393062333364393864643665336266323061646132 Jan 23 17:28:22.524000 audit: BPF prog-id=136 op=LOAD Jan 23 17:28:22.524000 audit[3480]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=3373 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:22.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730363233373038393062333364393864643665336266323061646132 Jan 23 17:28:22.569296 containerd[2108]: time="2026-01-23T17:28:22.569171138Z" level=info msg="StartContainer for \"7062370890b33d98dd6e3bf20ada27104e27dc449016d60f3b284de870672ffb\" returns successfully" Jan 23 17:28:22.724302 kubelet[3259]: E0123 17:28:22.724160 3259 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-f00ee6181d\" not found" node="ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:22.727158 kubelet[3259]: E0123 17:28:22.726845 3259 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-f00ee6181d\" not found" node="ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:22.730360 kubelet[3259]: E0123 17:28:22.730217 3259 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-f00ee6181d\" not found" node="ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:23.284112 
kubelet[3259]: I0123 17:28:23.284066 3259 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:23.733189 kubelet[3259]: E0123 17:28:23.733116 3259 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-f00ee6181d\" not found" node="ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:23.733556 kubelet[3259]: E0123 17:28:23.733426 3259 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-f00ee6181d\" not found" node="ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:23.778253 kubelet[3259]: E0123 17:28:23.778204 3259 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547.1.0-a-f00ee6181d\" not found" node="ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:23.944178 kubelet[3259]: I0123 17:28:23.943987 3259 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:23.944178 kubelet[3259]: E0123 17:28:23.944032 3259 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4547.1.0-a-f00ee6181d\": node \"ci-4547.1.0-a-f00ee6181d\" not found" Jan 23 17:28:23.998400 kubelet[3259]: I0123 17:28:23.997187 3259 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:24.006475 kubelet[3259]: E0123 17:28:24.006435 3259 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.1.0-a-f00ee6181d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:24.006475 kubelet[3259]: I0123 17:28:24.006469 3259 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:24.008503 kubelet[3259]: E0123 17:28:24.008458 3259 kubelet.go:3196] "Failed creating a 
mirror pod" err="pods \"kube-apiserver-ci-4547.1.0-a-f00ee6181d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:24.008675 kubelet[3259]: I0123 17:28:24.008575 3259 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:24.010207 kubelet[3259]: E0123 17:28:24.010182 3259 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.1.0-a-f00ee6181d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:24.678350 kubelet[3259]: I0123 17:28:24.678305 3259 apiserver.go:52] "Watching apiserver" Jan 23 17:28:24.695512 kubelet[3259]: I0123 17:28:24.695464 3259 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 17:28:25.821221 systemd[1]: Reload requested from client PID 3525 ('systemctl') (unit session-10.scope)... Jan 23 17:28:25.821236 systemd[1]: Reloading... Jan 23 17:28:25.910303 zram_generator::config[3578]: No configuration found. Jan 23 17:28:26.078069 systemd[1]: Reloading finished in 256 ms. Jan 23 17:28:26.097167 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:28:26.109639 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 17:28:26.109888 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:28:26.125669 kernel: kauditd_printk_skb: 204 callbacks suppressed Jan 23 17:28:26.125750 kernel: audit: type=1131 audit(1769189306.109:422): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:28:26.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:28:26.109952 systemd[1]: kubelet.service: Consumed 554ms CPU time, 127.8M memory peak. Jan 23 17:28:26.114610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:28:26.126000 audit: BPF prog-id=137 op=LOAD Jan 23 17:28:26.136446 kernel: audit: type=1334 audit(1769189306.126:423): prog-id=137 op=LOAD Jan 23 17:28:26.136529 kernel: audit: type=1334 audit(1769189306.131:424): prog-id=87 op=UNLOAD Jan 23 17:28:26.131000 audit: BPF prog-id=87 op=UNLOAD Jan 23 17:28:26.131000 audit: BPF prog-id=138 op=LOAD Jan 23 17:28:26.131000 audit: BPF prog-id=139 op=LOAD Jan 23 17:28:26.145606 kernel: audit: type=1334 audit(1769189306.131:425): prog-id=138 op=LOAD Jan 23 17:28:26.145667 kernel: audit: type=1334 audit(1769189306.131:426): prog-id=139 op=LOAD Jan 23 17:28:26.131000 audit: BPF prog-id=88 op=UNLOAD Jan 23 17:28:26.149749 kernel: audit: type=1334 audit(1769189306.131:427): prog-id=88 op=UNLOAD Jan 23 17:28:26.131000 audit: BPF prog-id=89 op=UNLOAD Jan 23 17:28:26.153762 kernel: audit: type=1334 audit(1769189306.131:428): prog-id=89 op=UNLOAD Jan 23 17:28:26.157994 kernel: audit: type=1334 audit(1769189306.136:429): prog-id=140 op=LOAD Jan 23 17:28:26.136000 audit: BPF prog-id=140 op=LOAD Jan 23 17:28:26.136000 audit: BPF prog-id=103 op=UNLOAD Jan 23 17:28:26.162159 kernel: audit: type=1334 audit(1769189306.136:430): prog-id=103 op=UNLOAD Jan 23 17:28:26.136000 audit: BPF prog-id=141 op=LOAD Jan 23 17:28:26.166246 kernel: audit: type=1334 audit(1769189306.136:431): prog-id=141 op=LOAD Jan 23 17:28:26.136000 audit: BPF prog-id=142 op=LOAD Jan 23 17:28:26.136000 audit: BPF prog-id=104 op=UNLOAD Jan 23 17:28:26.136000 audit: BPF prog-id=105 op=UNLOAD Jan 23 17:28:26.136000 audit: BPF 
prog-id=143 op=LOAD Jan 23 17:28:26.136000 audit: BPF prog-id=96 op=UNLOAD Jan 23 17:28:26.136000 audit: BPF prog-id=144 op=LOAD Jan 23 17:28:26.136000 audit: BPF prog-id=145 op=LOAD Jan 23 17:28:26.136000 audit: BPF prog-id=97 op=UNLOAD Jan 23 17:28:26.136000 audit: BPF prog-id=98 op=UNLOAD Jan 23 17:28:26.141000 audit: BPF prog-id=146 op=LOAD Jan 23 17:28:26.141000 audit: BPF prog-id=106 op=UNLOAD Jan 23 17:28:26.145000 audit: BPF prog-id=147 op=LOAD Jan 23 17:28:26.145000 audit: BPF prog-id=93 op=UNLOAD Jan 23 17:28:26.149000 audit: BPF prog-id=148 op=LOAD Jan 23 17:28:26.149000 audit: BPF prog-id=149 op=LOAD Jan 23 17:28:26.149000 audit: BPF prog-id=94 op=UNLOAD Jan 23 17:28:26.149000 audit: BPF prog-id=95 op=UNLOAD Jan 23 17:28:26.153000 audit: BPF prog-id=150 op=LOAD Jan 23 17:28:26.153000 audit: BPF prog-id=102 op=UNLOAD Jan 23 17:28:26.157000 audit: BPF prog-id=151 op=LOAD Jan 23 17:28:26.157000 audit: BPF prog-id=99 op=UNLOAD Jan 23 17:28:26.157000 audit: BPF prog-id=152 op=LOAD Jan 23 17:28:26.157000 audit: BPF prog-id=153 op=LOAD Jan 23 17:28:26.157000 audit: BPF prog-id=100 op=UNLOAD Jan 23 17:28:26.157000 audit: BPF prog-id=101 op=UNLOAD Jan 23 17:28:26.165000 audit: BPF prog-id=154 op=LOAD Jan 23 17:28:26.165000 audit: BPF prog-id=92 op=UNLOAD Jan 23 17:28:26.166000 audit: BPF prog-id=155 op=LOAD Jan 23 17:28:26.166000 audit: BPF prog-id=156 op=LOAD Jan 23 17:28:26.166000 audit: BPF prog-id=90 op=UNLOAD Jan 23 17:28:26.166000 audit: BPF prog-id=91 op=UNLOAD Jan 23 17:28:26.273347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:28:26.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:28:26.280569 (kubelet)[3639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:28:26.312656 kubelet[3639]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:28:26.312656 kubelet[3639]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 17:28:26.312656 kubelet[3639]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:28:26.313036 kubelet[3639]: I0123 17:28:26.312696 3639 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:28:26.317316 kubelet[3639]: I0123 17:28:26.317256 3639 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 17:28:26.317316 kubelet[3639]: I0123 17:28:26.317310 3639 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:28:26.317536 kubelet[3639]: I0123 17:28:26.317516 3639 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 17:28:26.318529 kubelet[3639]: I0123 17:28:26.318510 3639 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 23 17:28:26.320304 kubelet[3639]: I0123 17:28:26.320082 3639 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:28:26.323568 kubelet[3639]: I0123 17:28:26.323541 3639 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:28:26.328798 kubelet[3639]: I0123 17:28:26.327561 3639 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 17:28:26.328798 kubelet[3639]: I0123 17:28:26.327732 3639 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 17:28:26.328798 kubelet[3639]: I0123 17:28:26.327750 3639 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.1.0-a-f00ee6181d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"
none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 17:28:26.328798 kubelet[3639]: I0123 17:28:26.327872 3639 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 17:28:26.328963 kubelet[3639]: I0123 17:28:26.327878 3639 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 17:28:26.328963 kubelet[3639]: I0123 17:28:26.327915 3639 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:28:26.328963 kubelet[3639]: I0123 17:28:26.328027 3639 kubelet.go:446] "Attempting to sync node with API server" Jan 23 17:28:26.328963 kubelet[3639]: I0123 17:28:26.328037 3639 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 17:28:26.328963 kubelet[3639]: I0123 17:28:26.328055 3639 kubelet.go:352] "Adding apiserver pod source" Jan 23 17:28:26.328963 kubelet[3639]: I0123 17:28:26.328063 3639 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 17:28:26.332077 kubelet[3639]: I0123 17:28:26.332056 3639 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 17:28:26.332509 kubelet[3639]: I0123 17:28:26.332492 3639 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 17:28:26.332928 kubelet[3639]: I0123 17:28:26.332909 3639 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 17:28:26.333009 kubelet[3639]: I0123 17:28:26.333001 3639 server.go:1287] "Started kubelet" Jan 23 17:28:26.337622 kubelet[3639]: I0123 17:28:26.337187 3639 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 17:28:26.346490 kubelet[3639]: I0123 
17:28:26.346448 3639 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 17:28:26.348568 kubelet[3639]: I0123 17:28:26.347771 3639 server.go:479] "Adding debug handlers to kubelet server" Jan 23 17:28:26.349299 kubelet[3639]: I0123 17:28:26.348823 3639 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:28:26.350200 kubelet[3639]: I0123 17:28:26.348223 3639 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:28:26.350911 kubelet[3639]: I0123 17:28:26.350897 3639 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:28:26.351133 kubelet[3639]: I0123 17:28:26.351110 3639 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 17:28:26.352314 kubelet[3639]: E0123 17:28:26.351429 3639 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:28:26.352314 kubelet[3639]: I0123 17:28:26.351587 3639 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 17:28:26.352314 kubelet[3639]: I0123 17:28:26.351686 3639 reconciler.go:26] "Reconciler: start to sync state" Jan 23 17:28:26.353300 kubelet[3639]: I0123 17:28:26.352725 3639 factory.go:221] Registration of the systemd container factory successfully Jan 23 17:28:26.353488 kubelet[3639]: I0123 17:28:26.353466 3639 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 17:28:26.355163 kubelet[3639]: I0123 17:28:26.355102 3639 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 23 17:28:26.355577 kubelet[3639]: I0123 17:28:26.355354 3639 factory.go:221] Registration of the containerd container factory successfully Jan 23 17:28:26.356643 kubelet[3639]: I0123 17:28:26.356614 3639 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 17:28:26.356643 kubelet[3639]: I0123 17:28:26.356642 3639 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 17:28:26.356720 kubelet[3639]: I0123 17:28:26.356664 3639 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 17:28:26.356720 kubelet[3639]: I0123 17:28:26.356668 3639 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 17:28:26.356720 kubelet[3639]: E0123 17:28:26.356708 3639 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:28:26.401517 kubelet[3639]: I0123 17:28:26.401486 3639 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 17:28:26.401517 kubelet[3639]: I0123 17:28:26.401504 3639 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:28:26.401517 kubelet[3639]: I0123 17:28:26.401523 3639 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:28:26.401686 kubelet[3639]: I0123 17:28:26.401663 3639 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 17:28:26.401686 kubelet[3639]: I0123 17:28:26.401670 3639 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 17:28:26.401686 kubelet[3639]: I0123 17:28:26.401685 3639 policy_none.go:49] "None policy: Start" Jan 23 17:28:26.401737 kubelet[3639]: I0123 17:28:26.401693 3639 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 17:28:26.401737 kubelet[3639]: I0123 17:28:26.401700 3639 state_mem.go:35] "Initializing new in-memory state store" Jan 23 17:28:26.401790 kubelet[3639]: I0123 17:28:26.401776 3639 
state_mem.go:75] "Updated machine memory state" Jan 23 17:28:26.406301 kubelet[3639]: I0123 17:28:26.406244 3639 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 17:28:26.408597 kubelet[3639]: I0123 17:28:26.407939 3639 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:28:26.408597 kubelet[3639]: I0123 17:28:26.407964 3639 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 17:28:26.412306 kubelet[3639]: I0123 17:28:26.412221 3639 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:28:26.416246 kubelet[3639]: E0123 17:28:26.415845 3639 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 17:28:26.458096 kubelet[3639]: I0123 17:28:26.458030 3639 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:26.458237 kubelet[3639]: I0123 17:28:26.458120 3639 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:26.458744 kubelet[3639]: I0123 17:28:26.458592 3639 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:26.466073 kubelet[3639]: W0123 17:28:26.466035 3639 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 17:28:26.472131 kubelet[3639]: W0123 17:28:26.471961 3639 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 17:28:26.472131 kubelet[3639]: W0123 17:28:26.472003 3639 warnings.go:70] metadata.name: this is used in the 
Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 17:28:26.523691 kubelet[3639]: I0123 17:28:26.523666 3639 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:26.536263 kubelet[3639]: I0123 17:28:26.536110 3639 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:26.536414 kubelet[3639]: I0123 17:28:26.536338 3639 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:26.552713 kubelet[3639]: I0123 17:28:26.552670 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f4ec7c180e87c215fda2c89db5d47124-kubeconfig\") pod \"kube-controller-manager-ci-4547.1.0-a-f00ee6181d\" (UID: \"f4ec7c180e87c215fda2c89db5d47124\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:26.552713 kubelet[3639]: I0123 17:28:26.552702 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4ec7c180e87c215fda2c89db5d47124-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.1.0-a-f00ee6181d\" (UID: \"f4ec7c180e87c215fda2c89db5d47124\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:26.552713 kubelet[3639]: I0123 17:28:26.552720 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d065ae7825ffe2854c2df3e8470da2d9-kubeconfig\") pod \"kube-scheduler-ci-4547.1.0-a-f00ee6181d\" (UID: \"d065ae7825ffe2854c2df3e8470da2d9\") " pod="kube-system/kube-scheduler-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:26.552713 kubelet[3639]: I0123 17:28:26.552730 3639 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e33337b010e01b84ffe405e3ba1fe22-k8s-certs\") pod \"kube-apiserver-ci-4547.1.0-a-f00ee6181d\" (UID: \"0e33337b010e01b84ffe405e3ba1fe22\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:26.552924 kubelet[3639]: I0123 17:28:26.552743 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4ec7c180e87c215fda2c89db5d47124-ca-certs\") pod \"kube-controller-manager-ci-4547.1.0-a-f00ee6181d\" (UID: \"f4ec7c180e87c215fda2c89db5d47124\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:26.552924 kubelet[3639]: I0123 17:28:26.552754 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4ec7c180e87c215fda2c89db5d47124-k8s-certs\") pod \"kube-controller-manager-ci-4547.1.0-a-f00ee6181d\" (UID: \"f4ec7c180e87c215fda2c89db5d47124\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:26.552924 kubelet[3639]: I0123 17:28:26.552764 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e33337b010e01b84ffe405e3ba1fe22-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.1.0-a-f00ee6181d\" (UID: \"0e33337b010e01b84ffe405e3ba1fe22\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:26.552924 kubelet[3639]: I0123 17:28:26.552777 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f4ec7c180e87c215fda2c89db5d47124-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.1.0-a-f00ee6181d\" (UID: \"f4ec7c180e87c215fda2c89db5d47124\") " 
pod="kube-system/kube-controller-manager-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:26.552924 kubelet[3639]: I0123 17:28:26.552787 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e33337b010e01b84ffe405e3ba1fe22-ca-certs\") pod \"kube-apiserver-ci-4547.1.0-a-f00ee6181d\" (UID: \"0e33337b010e01b84ffe405e3ba1fe22\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-f00ee6181d" Jan 23 17:28:27.328739 kubelet[3639]: I0123 17:28:27.328684 3639 apiserver.go:52] "Watching apiserver" Jan 23 17:28:27.352449 kubelet[3639]: I0123 17:28:27.352403 3639 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 17:28:27.400117 kubelet[3639]: I0123 17:28:27.400057 3639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547.1.0-a-f00ee6181d" podStartSLOduration=1.400026818 podStartE2EDuration="1.400026818s" podCreationTimestamp="2026-01-23 17:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:28:27.399178225 +0000 UTC m=+1.116057771" watchObservedRunningTime="2026-01-23 17:28:27.400026818 +0000 UTC m=+1.116906356" Jan 23 17:28:27.410440 kubelet[3639]: I0123 17:28:27.410382 3639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547.1.0-a-f00ee6181d" podStartSLOduration=1.410366719 podStartE2EDuration="1.410366719s" podCreationTimestamp="2026-01-23 17:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:28:27.410034504 +0000 UTC m=+1.126914058" watchObservedRunningTime="2026-01-23 17:28:27.410366719 +0000 UTC m=+1.127246257" Jan 23 17:28:27.441881 kubelet[3639]: I0123 17:28:27.441822 3639 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547.1.0-a-f00ee6181d" podStartSLOduration=1.441805321 podStartE2EDuration="1.441805321s" podCreationTimestamp="2026-01-23 17:28:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:28:27.423666474 +0000 UTC m=+1.140546012" watchObservedRunningTime="2026-01-23 17:28:27.441805321 +0000 UTC m=+1.158684859" Jan 23 17:28:31.188912 kubelet[3639]: I0123 17:28:31.188855 3639 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 17:28:31.189979 containerd[2108]: time="2026-01-23T17:28:31.189882136Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 17:28:31.190560 kubelet[3639]: I0123 17:28:31.190078 3639 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 17:28:32.116921 systemd[1]: Created slice kubepods-besteffort-pod0bf2f8a4_1b61_4b55_bf57_ea1d8e8c122d.slice - libcontainer container kubepods-besteffort-pod0bf2f8a4_1b61_4b55_bf57_ea1d8e8c122d.slice. 
Jan 23 17:28:32.185291 kubelet[3639]: I0123 17:28:32.185222 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0bf2f8a4-1b61-4b55-bf57-ea1d8e8c122d-kube-proxy\") pod \"kube-proxy-gndvb\" (UID: \"0bf2f8a4-1b61-4b55-bf57-ea1d8e8c122d\") " pod="kube-system/kube-proxy-gndvb" Jan 23 17:28:32.185606 kubelet[3639]: I0123 17:28:32.185495 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0bf2f8a4-1b61-4b55-bf57-ea1d8e8c122d-xtables-lock\") pod \"kube-proxy-gndvb\" (UID: \"0bf2f8a4-1b61-4b55-bf57-ea1d8e8c122d\") " pod="kube-system/kube-proxy-gndvb" Jan 23 17:28:32.185606 kubelet[3639]: I0123 17:28:32.185519 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bf2f8a4-1b61-4b55-bf57-ea1d8e8c122d-lib-modules\") pod \"kube-proxy-gndvb\" (UID: \"0bf2f8a4-1b61-4b55-bf57-ea1d8e8c122d\") " pod="kube-system/kube-proxy-gndvb" Jan 23 17:28:32.185606 kubelet[3639]: I0123 17:28:32.185557 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn4nf\" (UniqueName: \"kubernetes.io/projected/0bf2f8a4-1b61-4b55-bf57-ea1d8e8c122d-kube-api-access-gn4nf\") pod \"kube-proxy-gndvb\" (UID: \"0bf2f8a4-1b61-4b55-bf57-ea1d8e8c122d\") " pod="kube-system/kube-proxy-gndvb" Jan 23 17:28:32.313772 systemd[1]: Created slice kubepods-besteffort-pod74f51dc5_432b_42d1_ac47_0a8a96e7abff.slice - libcontainer container kubepods-besteffort-pod74f51dc5_432b_42d1_ac47_0a8a96e7abff.slice. 
Jan 23 17:28:32.386677 kubelet[3639]: I0123 17:28:32.386543 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4fnl\" (UniqueName: \"kubernetes.io/projected/74f51dc5-432b-42d1-ac47-0a8a96e7abff-kube-api-access-q4fnl\") pod \"tigera-operator-7dcd859c48-vkdtb\" (UID: \"74f51dc5-432b-42d1-ac47-0a8a96e7abff\") " pod="tigera-operator/tigera-operator-7dcd859c48-vkdtb" Jan 23 17:28:32.386677 kubelet[3639]: I0123 17:28:32.386588 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/74f51dc5-432b-42d1-ac47-0a8a96e7abff-var-lib-calico\") pod \"tigera-operator-7dcd859c48-vkdtb\" (UID: \"74f51dc5-432b-42d1-ac47-0a8a96e7abff\") " pod="tigera-operator/tigera-operator-7dcd859c48-vkdtb" Jan 23 17:28:32.425756 containerd[2108]: time="2026-01-23T17:28:32.425707187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gndvb,Uid:0bf2f8a4-1b61-4b55-bf57-ea1d8e8c122d,Namespace:kube-system,Attempt:0,}" Jan 23 17:28:32.497224 containerd[2108]: time="2026-01-23T17:28:32.496798771Z" level=info msg="connecting to shim f75ce4e9040d9827186580e3e15ebab075c74cd653b7d06288aae399fb968be2" address="unix:///run/containerd/s/7b10e10cc92a3f65f35ffe14a7f270a2c7b1444184fc9d88b098b2cb5c063671" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:28:32.522477 systemd[1]: Started cri-containerd-f75ce4e9040d9827186580e3e15ebab075c74cd653b7d06288aae399fb968be2.scope - libcontainer container f75ce4e9040d9827186580e3e15ebab075c74cd653b7d06288aae399fb968be2. 
Jan 23 17:28:32.538102 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 23 17:28:32.538541 kernel: audit: type=1334 audit(1769189312.529:464): prog-id=157 op=LOAD Jan 23 17:28:32.529000 audit: BPF prog-id=157 op=LOAD Jan 23 17:28:32.537000 audit: BPF prog-id=158 op=LOAD Jan 23 17:28:32.543329 kernel: audit: type=1334 audit(1769189312.537:465): prog-id=158 op=LOAD Jan 23 17:28:32.537000 audit[3705]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3692 pid=3705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637356365346539303430643938323731383635383065336531356562 Jan 23 17:28:32.578902 kernel: audit: type=1300 audit(1769189312.537:465): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3692 pid=3705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.578987 kernel: audit: type=1327 audit(1769189312.537:465): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637356365346539303430643938323731383635383065336531356562 Jan 23 17:28:32.537000 audit: BPF prog-id=158 op=UNLOAD Jan 23 17:28:32.537000 audit[3705]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3692 pid=3705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.600356 kernel: audit: type=1334 audit(1769189312.537:466): prog-id=158 op=UNLOAD Jan 23 17:28:32.600469 kernel: audit: type=1300 audit(1769189312.537:466): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3692 pid=3705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637356365346539303430643938323731383635383065336531356562 Jan 23 17:28:32.616340 kernel: audit: type=1327 audit(1769189312.537:466): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637356365346539303430643938323731383635383065336531356562 Jan 23 17:28:32.537000 audit: BPF prog-id=159 op=LOAD Jan 23 17:28:32.618315 containerd[2108]: time="2026-01-23T17:28:32.618095043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-vkdtb,Uid:74f51dc5-432b-42d1-ac47-0a8a96e7abff,Namespace:tigera-operator,Attempt:0,}" Jan 23 17:28:32.620520 kernel: audit: type=1334 audit(1769189312.537:467): prog-id=159 op=LOAD Jan 23 17:28:32.537000 audit[3705]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3692 pid=3705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.636474 kernel: audit: type=1300 audit(1769189312.537:467): arch=c00000b7 syscall=280 
success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3692 pid=3705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637356365346539303430643938323731383635383065336531356562 Jan 23 17:28:32.652767 kernel: audit: type=1327 audit(1769189312.537:467): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637356365346539303430643938323731383635383065336531356562 Jan 23 17:28:32.537000 audit: BPF prog-id=160 op=LOAD Jan 23 17:28:32.537000 audit[3705]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3692 pid=3705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637356365346539303430643938323731383635383065336531356562 Jan 23 17:28:32.542000 audit: BPF prog-id=160 op=UNLOAD Jan 23 17:28:32.542000 audit[3705]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3692 pid=3705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.542000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637356365346539303430643938323731383635383065336531356562 Jan 23 17:28:32.542000 audit: BPF prog-id=159 op=UNLOAD Jan 23 17:28:32.542000 audit[3705]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3692 pid=3705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637356365346539303430643938323731383635383065336531356562 Jan 23 17:28:32.542000 audit: BPF prog-id=161 op=LOAD Jan 23 17:28:32.542000 audit[3705]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3692 pid=3705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637356365346539303430643938323731383635383065336531356562 Jan 23 17:28:32.670167 containerd[2108]: time="2026-01-23T17:28:32.670120061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gndvb,Uid:0bf2f8a4-1b61-4b55-bf57-ea1d8e8c122d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f75ce4e9040d9827186580e3e15ebab075c74cd653b7d06288aae399fb968be2\"" Jan 23 17:28:32.673015 containerd[2108]: 
time="2026-01-23T17:28:32.672980092Z" level=info msg="CreateContainer within sandbox \"f75ce4e9040d9827186580e3e15ebab075c74cd653b7d06288aae399fb968be2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 17:28:32.703297 containerd[2108]: time="2026-01-23T17:28:32.702782357Z" level=info msg="Container c64b3ffb541dd4a90a4b7762ea59eeb87edb1ab0a7ffbf525a5d0c8c63bf5647: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:28:32.738126 containerd[2108]: time="2026-01-23T17:28:32.738078421Z" level=info msg="CreateContainer within sandbox \"f75ce4e9040d9827186580e3e15ebab075c74cd653b7d06288aae399fb968be2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c64b3ffb541dd4a90a4b7762ea59eeb87edb1ab0a7ffbf525a5d0c8c63bf5647\"" Jan 23 17:28:32.739234 containerd[2108]: time="2026-01-23T17:28:32.739195855Z" level=info msg="StartContainer for \"c64b3ffb541dd4a90a4b7762ea59eeb87edb1ab0a7ffbf525a5d0c8c63bf5647\"" Jan 23 17:28:32.741344 containerd[2108]: time="2026-01-23T17:28:32.741256369Z" level=info msg="connecting to shim c64b3ffb541dd4a90a4b7762ea59eeb87edb1ab0a7ffbf525a5d0c8c63bf5647" address="unix:///run/containerd/s/7b10e10cc92a3f65f35ffe14a7f270a2c7b1444184fc9d88b098b2cb5c063671" protocol=ttrpc version=3 Jan 23 17:28:32.746030 containerd[2108]: time="2026-01-23T17:28:32.745679951Z" level=info msg="connecting to shim 85879e0bc21c1e816c97dc0db642d88b0f3288ce4aed9edccda2dd3b7f0e7208" address="unix:///run/containerd/s/2064b1a9e1f39feb166239b489f86213e2a35162147ee6a2d30316cb442a5828" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:28:32.758447 systemd[1]: Started cri-containerd-c64b3ffb541dd4a90a4b7762ea59eeb87edb1ab0a7ffbf525a5d0c8c63bf5647.scope - libcontainer container c64b3ffb541dd4a90a4b7762ea59eeb87edb1ab0a7ffbf525a5d0c8c63bf5647. 
Jan 23 17:28:32.776428 systemd[1]: Started cri-containerd-85879e0bc21c1e816c97dc0db642d88b0f3288ce4aed9edccda2dd3b7f0e7208.scope - libcontainer container 85879e0bc21c1e816c97dc0db642d88b0f3288ce4aed9edccda2dd3b7f0e7208. Jan 23 17:28:32.784000 audit: BPF prog-id=162 op=LOAD Jan 23 17:28:32.785000 audit: BPF prog-id=163 op=LOAD Jan 23 17:28:32.785000 audit[3762]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=3741 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835383739653062633231633165383136633937646330646236343264 Jan 23 17:28:32.785000 audit: BPF prog-id=163 op=UNLOAD Jan 23 17:28:32.785000 audit[3762]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3741 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835383739653062633231633165383136633937646330646236343264 Jan 23 17:28:32.785000 audit: BPF prog-id=164 op=LOAD Jan 23 17:28:32.785000 audit[3762]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=3741 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 17:28:32.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835383739653062633231633165383136633937646330646236343264 Jan 23 17:28:32.785000 audit: BPF prog-id=165 op=LOAD Jan 23 17:28:32.785000 audit[3762]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=3741 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835383739653062633231633165383136633937646330646236343264 Jan 23 17:28:32.785000 audit: BPF prog-id=165 op=UNLOAD Jan 23 17:28:32.785000 audit[3762]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3741 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835383739653062633231633165383136633937646330646236343264 Jan 23 17:28:32.785000 audit: BPF prog-id=164 op=UNLOAD Jan 23 17:28:32.785000 audit[3762]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3741 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835383739653062633231633165383136633937646330646236343264 Jan 23 17:28:32.785000 audit: BPF prog-id=166 op=LOAD Jan 23 17:28:32.785000 audit[3762]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=3741 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835383739653062633231633165383136633937646330646236343264 Jan 23 17:28:32.790000 audit: BPF prog-id=167 op=LOAD Jan 23 17:28:32.790000 audit[3736]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3692 pid=3736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336346233666662353431646434613930613462373736326561353965 Jan 23 17:28:32.790000 audit: BPF prog-id=168 op=LOAD Jan 23 17:28:32.790000 audit[3736]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3692 pid=3736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336346233666662353431646434613930613462373736326561353965 Jan 23 17:28:32.790000 audit: BPF prog-id=168 op=UNLOAD Jan 23 17:28:32.790000 audit[3736]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3692 pid=3736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336346233666662353431646434613930613462373736326561353965 Jan 23 17:28:32.790000 audit: BPF prog-id=167 op=UNLOAD Jan 23 17:28:32.790000 audit[3736]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3692 pid=3736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336346233666662353431646434613930613462373736326561353965 Jan 23 17:28:32.790000 audit: BPF prog-id=169 op=LOAD Jan 23 17:28:32.790000 audit[3736]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3692 pid=3736 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336346233666662353431646434613930613462373736326561353965 Jan 23 17:28:32.822603 containerd[2108]: time="2026-01-23T17:28:32.822558441Z" level=info msg="StartContainer for \"c64b3ffb541dd4a90a4b7762ea59eeb87edb1ab0a7ffbf525a5d0c8c63bf5647\" returns successfully" Jan 23 17:28:32.823260 containerd[2108]: time="2026-01-23T17:28:32.823031240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-vkdtb,Uid:74f51dc5-432b-42d1-ac47-0a8a96e7abff,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"85879e0bc21c1e816c97dc0db642d88b0f3288ce4aed9edccda2dd3b7f0e7208\"" Jan 23 17:28:32.826298 containerd[2108]: time="2026-01-23T17:28:32.826063474Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 17:28:32.905000 audit[3841]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=3841 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:32.905000 audit[3841]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc82c9220 a2=0 a3=1 items=0 ppid=3770 pid=3841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.905000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 23 17:28:32.909000 audit[3843]: NETFILTER_CFG table=mangle:58 family=10 entries=1 op=nft_register_chain pid=3843 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:32.909000 
audit[3843]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff29aaba0 a2=0 a3=1 items=0 ppid=3770 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.909000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 23 17:28:32.911000 audit[3845]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_chain pid=3845 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:32.911000 audit[3845]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4506ed0 a2=0 a3=1 items=0 ppid=3770 pid=3845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.911000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 23 17:28:32.912000 audit[3847]: NETFILTER_CFG table=nat:60 family=10 entries=1 op=nft_register_chain pid=3847 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:32.912000 audit[3847]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd83e0010 a2=0 a3=1 items=0 ppid=3770 pid=3847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.912000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 23 17:28:32.912000 audit[3848]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_chain pid=3848 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:32.912000 
audit[3848]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcbc2d9f0 a2=0 a3=1 items=0 ppid=3770 pid=3848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.912000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 23 17:28:32.913000 audit[3849]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=3849 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:32.913000 audit[3849]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd2429df0 a2=0 a3=1 items=0 ppid=3770 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:32.913000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 23 17:28:33.005000 audit[3850]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3850 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:33.005000 audit[3850]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc11ee460 a2=0 a3=1 items=0 ppid=3770 pid=3850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.005000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 23 17:28:33.007000 audit[3852]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3852 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Jan 23 17:28:33.007000 audit[3852]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffdea2e3b0 a2=0 a3=1 items=0 ppid=3770 pid=3852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.007000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 23 17:28:33.010000 audit[3855]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=3855 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:33.010000 audit[3855]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe745f190 a2=0 a3=1 items=0 ppid=3770 pid=3855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.010000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 23 17:28:33.011000 audit[3856]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=3856 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:33.011000 audit[3856]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffbd013e0 a2=0 a3=1 items=0 ppid=3770 pid=3856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.011000 
audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 23 17:28:33.014000 audit[3858]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3858 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:33.014000 audit[3858]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffce82d390 a2=0 a3=1 items=0 ppid=3770 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.014000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 23 17:28:33.015000 audit[3859]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3859 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:33.015000 audit[3859]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe1770c40 a2=0 a3=1 items=0 ppid=3770 pid=3859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.015000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 23 17:28:33.017000 audit[3861]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3861 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:33.017000 audit[3861]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd68365d0 a2=0 a3=1 items=0 ppid=3770 pid=3861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.017000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 23 17:28:33.019000 audit[3864]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=3864 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:33.019000 audit[3864]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffeb34aeb0 a2=0 a3=1 items=0 ppid=3770 pid=3864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.019000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 23 17:28:33.020000 audit[3865]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=3865 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:33.020000 audit[3865]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc00c9d10 a2=0 a3=1 items=0 ppid=3770 pid=3865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.020000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 23 17:28:33.022000 audit[3867]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule 
pid=3867 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:33.022000 audit[3867]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc7a62a00 a2=0 a3=1 items=0 ppid=3770 pid=3867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.022000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 23 17:28:33.023000 audit[3868]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=3868 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:33.023000 audit[3868]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff637f310 a2=0 a3=1 items=0 ppid=3770 pid=3868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.023000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 23 17:28:33.025000 audit[3870]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=3870 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:33.025000 audit[3870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffde6eace0 a2=0 a3=1 items=0 ppid=3770 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.025000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 23 17:28:33.028000 audit[3873]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=3873 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:33.028000 audit[3873]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcf0979d0 a2=0 a3=1 items=0 ppid=3770 pid=3873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.028000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 23 17:28:33.031000 audit[3876]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=3876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:33.031000 audit[3876]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe2e72310 a2=0 a3=1 items=0 ppid=3770 pid=3876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.031000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 23 17:28:33.032000 audit[3877]: NETFILTER_CFG table=nat:77 family=2 entries=1 
op=nft_register_chain pid=3877 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:33.032000 audit[3877]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffeba6fe00 a2=0 a3=1 items=0 ppid=3770 pid=3877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.032000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 23 17:28:33.034000 audit[3879]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3879 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:33.034000 audit[3879]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffed0c7d50 a2=0 a3=1 items=0 ppid=3770 pid=3879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.034000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 17:28:33.037000 audit[3882]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=3882 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:33.037000 audit[3882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffddc091d0 a2=0 a3=1 items=0 ppid=3770 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.037000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 17:28:33.038000 audit[3883]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=3883 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:33.038000 audit[3883]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdc3add40 a2=0 a3=1 items=0 ppid=3770 pid=3883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.038000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 23 17:28:33.040000 audit[3885]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=3885 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:28:33.040000 audit[3885]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffcb5135a0 a2=0 a3=1 items=0 ppid=3770 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.040000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 23 17:28:33.098000 audit[3891]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=3891 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:33.098000 audit[3891]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff47378d0 a2=0 a3=1 items=0 ppid=3770 pid=3891 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.098000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:33.106000 audit[3891]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=3891 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:33.106000 audit[3891]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffff47378d0 a2=0 a3=1 items=0 ppid=3770 pid=3891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.106000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:33.108000 audit[3896]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3896 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.108000 audit[3896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe2066c90 a2=0 a3=1 items=0 ppid=3770 pid=3896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.108000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 23 17:28:33.110000 audit[3898]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=3898 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.110000 audit[3898]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd27428d0 
a2=0 a3=1 items=0 ppid=3770 pid=3898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.110000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 23 17:28:33.113000 audit[3901]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3901 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.113000 audit[3901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffcb3f9b0 a2=0 a3=1 items=0 ppid=3770 pid=3901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.113000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 23 17:28:33.114000 audit[3902]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3902 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.114000 audit[3902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdd6ff1f0 a2=0 a3=1 items=0 ppid=3770 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.114000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 23 17:28:33.117000 audit[3904]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3904 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.117000 audit[3904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd7ab4e00 a2=0 a3=1 items=0 ppid=3770 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.117000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 23 17:28:33.118000 audit[3905]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3905 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.118000 audit[3905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe8d6f9a0 a2=0 a3=1 items=0 ppid=3770 pid=3905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.118000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 23 17:28:33.120000 audit[3907]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3907 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.120000 audit[3907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdde90120 a2=0 a3=1 items=0 ppid=3770 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.120000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 23 17:28:33.123000 audit[3910]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=3910 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.123000 audit[3910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffcfe11940 a2=0 a3=1 items=0 ppid=3770 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.123000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 23 17:28:33.124000 audit[3911]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=3911 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.124000 audit[3911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe6b17840 a2=0 a3=1 items=0 ppid=3770 pid=3911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.124000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 23 17:28:33.127000 audit[3913]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule 
pid=3913 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.127000 audit[3913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc5234890 a2=0 a3=1 items=0 ppid=3770 pid=3913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.127000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 23 17:28:33.129000 audit[3914]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=3914 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.129000 audit[3914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe0341060 a2=0 a3=1 items=0 ppid=3770 pid=3914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.129000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 23 17:28:33.131000 audit[3916]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=3916 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.131000 audit[3916]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe7021980 a2=0 a3=1 items=0 ppid=3770 pid=3916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.131000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 23 17:28:33.134000 audit[3919]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=3919 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.134000 audit[3919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdd4ef8e0 a2=0 a3=1 items=0 ppid=3770 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.134000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 23 17:28:33.137000 audit[3922]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=3922 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.137000 audit[3922]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffb166280 a2=0 a3=1 items=0 ppid=3770 pid=3922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.137000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 23 17:28:33.138000 audit[3923]: NETFILTER_CFG table=nat:98 family=10 entries=1 
op=nft_register_chain pid=3923 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.138000 audit[3923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc67ceac0 a2=0 a3=1 items=0 ppid=3770 pid=3923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.138000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 23 17:28:33.141000 audit[3925]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=3925 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.141000 audit[3925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff5c802d0 a2=0 a3=1 items=0 ppid=3770 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.141000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 17:28:33.144000 audit[3928]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=3928 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.144000 audit[3928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdfd7eb00 a2=0 a3=1 items=0 ppid=3770 pid=3928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.144000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 17:28:33.145000 audit[3929]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=3929 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.145000 audit[3929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcfdea6b0 a2=0 a3=1 items=0 ppid=3770 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.145000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 23 17:28:33.147000 audit[3931]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=3931 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.147000 audit[3931]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe1a47800 a2=0 a3=1 items=0 ppid=3770 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.147000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 23 17:28:33.148000 audit[3932]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=3932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.148000 audit[3932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffde814380 a2=0 a3=1 items=0 ppid=3770 
pid=3932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.148000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 23 17:28:33.151000 audit[3934]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=3934 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.151000 audit[3934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff64c6080 a2=0 a3=1 items=0 ppid=3770 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.151000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 17:28:33.154000 audit[3937]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=3937 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:28:33.154000 audit[3937]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc601a450 a2=0 a3=1 items=0 ppid=3770 pid=3937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.154000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 17:28:33.158000 audit[3939]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=3939 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 23 17:28:33.158000 audit[3939]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 
a1=ffffd0d7b8c0 a2=0 a3=1 items=0 ppid=3770 pid=3939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.158000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:33.158000 audit[3939]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=3939 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 23 17:28:33.158000 audit[3939]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd0d7b8c0 a2=0 a3=1 items=0 ppid=3770 pid=3939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:33.158000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:33.323328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2884454248.mount: Deactivated successfully. Jan 23 17:28:33.409345 kubelet[3639]: I0123 17:28:33.409288 3639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gndvb" podStartSLOduration=1.40925179 podStartE2EDuration="1.40925179s" podCreationTimestamp="2026-01-23 17:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:28:33.408783415 +0000 UTC m=+7.125662961" watchObservedRunningTime="2026-01-23 17:28:33.40925179 +0000 UTC m=+7.126131328" Jan 23 17:28:34.270139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4205026570.mount: Deactivated successfully. 
Jan 23 17:28:34.749527 containerd[2108]: time="2026-01-23T17:28:34.749470467Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:34.753839 containerd[2108]: time="2026-01-23T17:28:34.753788451Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 23 17:28:34.757655 containerd[2108]: time="2026-01-23T17:28:34.757619986Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:34.762866 containerd[2108]: time="2026-01-23T17:28:34.762829333Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:34.763529 containerd[2108]: time="2026-01-23T17:28:34.763297381Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.937207544s" Jan 23 17:28:34.763529 containerd[2108]: time="2026-01-23T17:28:34.763324702Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 23 17:28:34.767325 containerd[2108]: time="2026-01-23T17:28:34.766769932Z" level=info msg="CreateContainer within sandbox \"85879e0bc21c1e816c97dc0db642d88b0f3288ce4aed9edccda2dd3b7f0e7208\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 17:28:34.795520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2393587248.mount: Deactivated successfully. 
Jan 23 17:28:34.799450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2912726674.mount: Deactivated successfully. Jan 23 17:28:34.800290 containerd[2108]: time="2026-01-23T17:28:34.799854528Z" level=info msg="Container b6aad2500e967247e239c247d6ad4530f5702ba9e7f6e6f8f156d40b276ad6e0: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:28:34.817314 containerd[2108]: time="2026-01-23T17:28:34.817242270Z" level=info msg="CreateContainer within sandbox \"85879e0bc21c1e816c97dc0db642d88b0f3288ce4aed9edccda2dd3b7f0e7208\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b6aad2500e967247e239c247d6ad4530f5702ba9e7f6e6f8f156d40b276ad6e0\"" Jan 23 17:28:34.818167 containerd[2108]: time="2026-01-23T17:28:34.818060893Z" level=info msg="StartContainer for \"b6aad2500e967247e239c247d6ad4530f5702ba9e7f6e6f8f156d40b276ad6e0\"" Jan 23 17:28:34.819038 containerd[2108]: time="2026-01-23T17:28:34.818874299Z" level=info msg="connecting to shim b6aad2500e967247e239c247d6ad4530f5702ba9e7f6e6f8f156d40b276ad6e0" address="unix:///run/containerd/s/2064b1a9e1f39feb166239b489f86213e2a35162147ee6a2d30316cb442a5828" protocol=ttrpc version=3 Jan 23 17:28:34.836457 systemd[1]: Started cri-containerd-b6aad2500e967247e239c247d6ad4530f5702ba9e7f6e6f8f156d40b276ad6e0.scope - libcontainer container b6aad2500e967247e239c247d6ad4530f5702ba9e7f6e6f8f156d40b276ad6e0. 
Jan 23 17:28:34.844000 audit: BPF prog-id=170 op=LOAD Jan 23 17:28:34.844000 audit: BPF prog-id=171 op=LOAD Jan 23 17:28:34.844000 audit[3948]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3741 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:34.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236616164323530306539363732343765323339633234376436616434 Jan 23 17:28:34.844000 audit: BPF prog-id=171 op=UNLOAD Jan 23 17:28:34.844000 audit[3948]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3741 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:34.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236616164323530306539363732343765323339633234376436616434 Jan 23 17:28:34.844000 audit: BPF prog-id=172 op=LOAD Jan 23 17:28:34.844000 audit[3948]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3741 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:34.844000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236616164323530306539363732343765323339633234376436616434 Jan 23 17:28:34.844000 audit: BPF prog-id=173 op=LOAD Jan 23 17:28:34.844000 audit[3948]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3741 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:34.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236616164323530306539363732343765323339633234376436616434 Jan 23 17:28:34.844000 audit: BPF prog-id=173 op=UNLOAD Jan 23 17:28:34.844000 audit[3948]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3741 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:34.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236616164323530306539363732343765323339633234376436616434 Jan 23 17:28:34.844000 audit: BPF prog-id=172 op=UNLOAD Jan 23 17:28:34.844000 audit[3948]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3741 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
23 17:28:34.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236616164323530306539363732343765323339633234376436616434 Jan 23 17:28:34.844000 audit: BPF prog-id=174 op=LOAD Jan 23 17:28:34.844000 audit[3948]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3741 pid=3948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:34.844000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236616164323530306539363732343765323339633234376436616434 Jan 23 17:28:34.866225 containerd[2108]: time="2026-01-23T17:28:34.866142160Z" level=info msg="StartContainer for \"b6aad2500e967247e239c247d6ad4530f5702ba9e7f6e6f8f156d40b276ad6e0\" returns successfully" Jan 23 17:28:35.413375 kubelet[3639]: I0123 17:28:35.412886 3639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-vkdtb" podStartSLOduration=1.473875166 podStartE2EDuration="3.412867342s" podCreationTimestamp="2026-01-23 17:28:32 +0000 UTC" firstStartedPulling="2026-01-23 17:28:32.825221778 +0000 UTC m=+6.542101324" lastFinishedPulling="2026-01-23 17:28:34.764213962 +0000 UTC m=+8.481093500" observedRunningTime="2026-01-23 17:28:35.412694659 +0000 UTC m=+9.129574197" watchObservedRunningTime="2026-01-23 17:28:35.412867342 +0000 UTC m=+9.129746880" Jan 23 17:28:40.115114 sudo[2610]: pam_unix(sudo:session): session closed for user root Jan 23 17:28:40.113000 audit[2610]: USER_END pid=2610 uid=500 auid=500 ses=10 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:28:40.120144 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 23 17:28:40.120235 kernel: audit: type=1106 audit(1769189320.113:544): pid=2610 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:28:40.115000 audit[2610]: CRED_DISP pid=2610 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:28:40.148808 kernel: audit: type=1104 audit(1769189320.115:545): pid=2610 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:28:40.195610 sshd[2609]: Connection closed by 10.200.16.10 port 45742 Jan 23 17:28:40.197798 sshd-session[2605]: pam_unix(sshd:session): session closed for user core Jan 23 17:28:40.198000 audit[2605]: USER_END pid=2605 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:28:40.218721 systemd[1]: sshd@6-10.200.20.34:22-10.200.16.10:45742.service: Deactivated successfully. Jan 23 17:28:40.223096 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 17:28:40.225197 systemd[1]: session-10.scope: Consumed 3.377s CPU time, 220.5M memory peak. 
Jan 23 17:28:40.206000 audit[2605]: CRED_DISP pid=2605 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:28:40.241707 kernel: audit: type=1106 audit(1769189320.198:546): pid=2605 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:28:40.241849 kernel: audit: type=1104 audit(1769189320.206:547): pid=2605 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:28:40.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.34:22-10.200.16.10:45742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:28:40.244721 systemd-logind[2081]: Session 10 logged out. Waiting for processes to exit. Jan 23 17:28:40.257291 kernel: audit: type=1131 audit(1769189320.220:548): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.34:22-10.200.16.10:45742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:28:40.260532 systemd-logind[2081]: Removed session 10. 
Jan 23 17:28:42.453000 audit[4023]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4023 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:42.453000 audit[4023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffdd4475d0 a2=0 a3=1 items=0 ppid=3770 pid=4023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:42.494423 kernel: audit: type=1325 audit(1769189322.453:549): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4023 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:42.494529 kernel: audit: type=1300 audit(1769189322.453:549): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffdd4475d0 a2=0 a3=1 items=0 ppid=3770 pid=4023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:42.453000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:42.508838 kernel: audit: type=1327 audit(1769189322.453:549): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:42.468000 audit[4023]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4023 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:42.520205 kernel: audit: type=1325 audit(1769189322.468:550): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4023 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:42.468000 audit[4023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdd4475d0 a2=0 a3=1 items=0 ppid=3770 pid=4023 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:42.540284 kernel: audit: type=1300 audit(1769189322.468:550): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdd4475d0 a2=0 a3=1 items=0 ppid=3770 pid=4023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:42.468000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:42.511000 audit[4025]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4025 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:42.511000 audit[4025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff865e390 a2=0 a3=1 items=0 ppid=3770 pid=4025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:42.511000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:42.521000 audit[4025]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4025 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:42.521000 audit[4025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff865e390 a2=0 a3=1 items=0 ppid=3770 pid=4025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:42.521000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:44.721000 audit[4027]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:44.721000 audit[4027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd64ef5a0 a2=0 a3=1 items=0 ppid=3770 pid=4027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:44.721000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:44.724000 audit[4027]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:44.724000 audit[4027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd64ef5a0 a2=0 a3=1 items=0 ppid=3770 pid=4027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:44.724000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:44.753000 audit[4029]: NETFILTER_CFG table=filter:114 family=2 entries=18 op=nft_register_rule pid=4029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:44.753000 audit[4029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff0b5dce0 a2=0 a3=1 items=0 ppid=3770 pid=4029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 23 17:28:44.753000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:44.758000 audit[4029]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:44.758000 audit[4029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff0b5dce0 a2=0 a3=1 items=0 ppid=3770 pid=4029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:44.758000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:45.848999 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 23 17:28:45.849142 kernel: audit: type=1325 audit(1769189325.772:557): table=filter:116 family=2 entries=19 op=nft_register_rule pid=4032 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:45.772000 audit[4032]: NETFILTER_CFG table=filter:116 family=2 entries=19 op=nft_register_rule pid=4032 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:45.772000 audit[4032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd31f2670 a2=0 a3=1 items=0 ppid=3770 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:45.772000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:45.883264 kernel: audit: type=1300 audit(1769189325.772:557): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd31f2670 a2=0 a3=1 items=0 ppid=3770 pid=4032 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:45.883504 kernel: audit: type=1327 audit(1769189325.772:557): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:45.881000 audit[4032]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4032 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:45.893753 kernel: audit: type=1325 audit(1769189325.881:558): table=nat:117 family=2 entries=12 op=nft_register_rule pid=4032 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:45.881000 audit[4032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd31f2670 a2=0 a3=1 items=0 ppid=3770 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:45.917117 kernel: audit: type=1300 audit(1769189325.881:558): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd31f2670 a2=0 a3=1 items=0 ppid=3770 pid=4032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:45.881000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:45.927298 kernel: audit: type=1327 audit(1769189325.881:558): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:46.900000 audit[4034]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4034 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 
17:28:46.900000 audit[4034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffca8e0e40 a2=0 a3=1 items=0 ppid=3770 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:46.930033 kernel: audit: type=1325 audit(1769189326.900:559): table=filter:118 family=2 entries=21 op=nft_register_rule pid=4034 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:46.930359 kernel: audit: type=1300 audit(1769189326.900:559): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffca8e0e40 a2=0 a3=1 items=0 ppid=3770 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:46.900000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:46.945967 kernel: audit: type=1327 audit(1769189326.900:559): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:46.916000 audit[4034]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4034 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:46.957138 kernel: audit: type=1325 audit(1769189326.916:560): table=nat:119 family=2 entries=12 op=nft_register_rule pid=4034 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:46.916000 audit[4034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffca8e0e40 a2=0 a3=1 items=0 ppid=3770 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
17:28:46.916000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:46.982520 systemd[1]: Created slice kubepods-besteffort-podd8c8f838_9a91_44ba_95fd_c236336e624b.slice - libcontainer container kubepods-besteffort-podd8c8f838_9a91_44ba_95fd_c236336e624b.slice. Jan 23 17:28:47.057689 kubelet[3639]: I0123 17:28:47.057582 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d8c8f838-9a91-44ba-95fd-c236336e624b-typha-certs\") pod \"calico-typha-9d4c44679-99vwm\" (UID: \"d8c8f838-9a91-44ba-95fd-c236336e624b\") " pod="calico-system/calico-typha-9d4c44679-99vwm" Jan 23 17:28:47.057689 kubelet[3639]: I0123 17:28:47.057630 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgmc9\" (UniqueName: \"kubernetes.io/projected/d8c8f838-9a91-44ba-95fd-c236336e624b-kube-api-access-qgmc9\") pod \"calico-typha-9d4c44679-99vwm\" (UID: \"d8c8f838-9a91-44ba-95fd-c236336e624b\") " pod="calico-system/calico-typha-9d4c44679-99vwm" Jan 23 17:28:47.057689 kubelet[3639]: I0123 17:28:47.057646 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8c8f838-9a91-44ba-95fd-c236336e624b-tigera-ca-bundle\") pod \"calico-typha-9d4c44679-99vwm\" (UID: \"d8c8f838-9a91-44ba-95fd-c236336e624b\") " pod="calico-system/calico-typha-9d4c44679-99vwm" Jan 23 17:28:47.174932 systemd[1]: Created slice kubepods-besteffort-pod8e864e0b_68f2_4a03_ab3e_dcaa2cad5e7b.slice - libcontainer container kubepods-besteffort-pod8e864e0b_68f2_4a03_ab3e_dcaa2cad5e7b.slice. 
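The audit PROCTITLE records above carry the process command line hex-encoded, with NUL bytes separating the argv elements. A minimal sketch (standalone Python, not part of the log itself) of decoding one of these values:

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded bytes,
    with NUL bytes separating the argv elements."""
    raw = bytes.fromhex(hex_str)
    return " ".join(part.decode() for part in raw.split(b"\x00"))

# The proctitle value repeated throughout the NETFILTER_CFG records above:
proctitle = (
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
)
print(decode_proctitle(proctitle))
# iptables-restore -w 5 -W 100000 --noflush --counters
```

This shows the repeated netfilter activity is kube-proxy-style rule syncing via `iptables-restore` (through `xtables-nft-multi`, per the `exe=` field) rather than ad-hoc rule changes.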
Jan 23 17:28:47.259259 kubelet[3639]: I0123 17:28:47.259219 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b-lib-modules\") pod \"calico-node-9jn88\" (UID: \"8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b\") " pod="calico-system/calico-node-9jn88" Jan 23 17:28:47.259589 kubelet[3639]: I0123 17:28:47.259495 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b-var-run-calico\") pod \"calico-node-9jn88\" (UID: \"8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b\") " pod="calico-system/calico-node-9jn88" Jan 23 17:28:47.259589 kubelet[3639]: I0123 17:28:47.259532 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b-flexvol-driver-host\") pod \"calico-node-9jn88\" (UID: \"8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b\") " pod="calico-system/calico-node-9jn88" Jan 23 17:28:47.259589 kubelet[3639]: I0123 17:28:47.259547 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b-node-certs\") pod \"calico-node-9jn88\" (UID: \"8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b\") " pod="calico-system/calico-node-9jn88" Jan 23 17:28:47.259589 kubelet[3639]: I0123 17:28:47.259558 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z4sf\" (UniqueName: \"kubernetes.io/projected/8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b-kube-api-access-5z4sf\") pod \"calico-node-9jn88\" (UID: \"8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b\") " pod="calico-system/calico-node-9jn88" Jan 23 17:28:47.259706 kubelet[3639]: I0123 
17:28:47.259605 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b-policysync\") pod \"calico-node-9jn88\" (UID: \"8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b\") " pod="calico-system/calico-node-9jn88" Jan 23 17:28:47.259706 kubelet[3639]: I0123 17:28:47.259648 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b-cni-log-dir\") pod \"calico-node-9jn88\" (UID: \"8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b\") " pod="calico-system/calico-node-9jn88" Jan 23 17:28:47.259706 kubelet[3639]: I0123 17:28:47.259663 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b-cni-net-dir\") pod \"calico-node-9jn88\" (UID: \"8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b\") " pod="calico-system/calico-node-9jn88" Jan 23 17:28:47.259706 kubelet[3639]: I0123 17:28:47.259675 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b-var-lib-calico\") pod \"calico-node-9jn88\" (UID: \"8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b\") " pod="calico-system/calico-node-9jn88" Jan 23 17:28:47.259706 kubelet[3639]: I0123 17:28:47.259688 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b-xtables-lock\") pod \"calico-node-9jn88\" (UID: \"8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b\") " pod="calico-system/calico-node-9jn88" Jan 23 17:28:47.259778 kubelet[3639]: I0123 17:28:47.259699 3639 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b-cni-bin-dir\") pod \"calico-node-9jn88\" (UID: \"8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b\") " pod="calico-system/calico-node-9jn88" Jan 23 17:28:47.259778 kubelet[3639]: I0123 17:28:47.259715 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b-tigera-ca-bundle\") pod \"calico-node-9jn88\" (UID: \"8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b\") " pod="calico-system/calico-node-9jn88" Jan 23 17:28:47.290401 containerd[2108]: time="2026-01-23T17:28:47.290363261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9d4c44679-99vwm,Uid:d8c8f838-9a91-44ba-95fd-c236336e624b,Namespace:calico-system,Attempt:0,}" Jan 23 17:28:47.348328 containerd[2108]: time="2026-01-23T17:28:47.348280425Z" level=info msg="connecting to shim 85a1340642ee1368fbdc8481c47a9214c365a54762c196bc7f9dda903afb13da" address="unix:///run/containerd/s/ca5abdbfc32d7af73dd140a97f72afddd2fc03260f0dfeaf02a04718940e3520" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:28:47.366850 kubelet[3639]: E0123 17:28:47.366655 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.366850 kubelet[3639]: W0123 17:28:47.366679 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.366850 kubelet[3639]: E0123 17:28:47.366698 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.377927 kubelet[3639]: E0123 17:28:47.377814 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtvct" podUID="25c5832d-778b-4f5d-974d-1be8e7376fdb" Jan 23 17:28:47.379126 kubelet[3639]: E0123 17:28:47.379096 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.379126 kubelet[3639]: W0123 17:28:47.379114 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.379126 kubelet[3639]: E0123 17:28:47.379132 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.389525 kubelet[3639]: E0123 17:28:47.389489 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.389695 kubelet[3639]: W0123 17:28:47.389605 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.389695 kubelet[3639]: E0123 17:28:47.389627 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.394593 systemd[1]: Started cri-containerd-85a1340642ee1368fbdc8481c47a9214c365a54762c196bc7f9dda903afb13da.scope - libcontainer container 85a1340642ee1368fbdc8481c47a9214c365a54762c196bc7f9dda903afb13da. 
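Each audit SYSCALL and NETFILTER_CFG record in this log is a flat run of `key=value` fields. A rough sketch of extracting fields from such a record; it assumes values contain no embedded spaces, which holds for the records shown here:

```python
def parse_audit_fields(record: str) -> dict:
    """Split a flat audit record into key=value fields.
    Assumes no spaces inside values (true for the records above)."""
    fields = {}
    for token in record.split():
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

# Field layout taken from a NETFILTER_CFG record above:
line = ('audit[4032]: NETFILTER_CFG table=nat:117 family=2 entries=12 '
        'op=nft_register_rule pid=4032 comm="iptables-restor"')
f = parse_audit_fields(line)
print(f["table"], f["entries"], f["op"])
# nat:117 12 nft_register_rule
```

For records whose values can contain spaces (e.g. `msg=` fields on some audit types), the `ausearch -i` tool is the robust decoder; this sketch only covers the simple flat records seen in this section.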
Jan 23 17:28:47.397959 kubelet[3639]: E0123 17:28:47.397358 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.397959 kubelet[3639]: W0123 17:28:47.397378 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.397959 kubelet[3639]: E0123 17:28:47.397397 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.397959 kubelet[3639]: E0123 17:28:47.397821 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.397959 kubelet[3639]: W0123 17:28:47.397832 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.397959 kubelet[3639]: E0123 17:28:47.397869 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.398143 kubelet[3639]: E0123 17:28:47.398007 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.398143 kubelet[3639]: W0123 17:28:47.398013 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.398143 kubelet[3639]: E0123 17:28:47.398020 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.398143 kubelet[3639]: E0123 17:28:47.398137 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.398143 kubelet[3639]: W0123 17:28:47.398141 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.398212 kubelet[3639]: E0123 17:28:47.398147 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.398956 kubelet[3639]: E0123 17:28:47.398255 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.398956 kubelet[3639]: W0123 17:28:47.398283 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.398956 kubelet[3639]: E0123 17:28:47.398291 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.398956 kubelet[3639]: E0123 17:28:47.398384 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.398956 kubelet[3639]: W0123 17:28:47.398388 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.398956 kubelet[3639]: E0123 17:28:47.398393 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.398956 kubelet[3639]: E0123 17:28:47.398476 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.398956 kubelet[3639]: W0123 17:28:47.398480 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.398956 kubelet[3639]: E0123 17:28:47.398485 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.398956 kubelet[3639]: E0123 17:28:47.398557 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.399997 kubelet[3639]: W0123 17:28:47.398560 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.399997 kubelet[3639]: E0123 17:28:47.398565 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.399997 kubelet[3639]: E0123 17:28:47.398647 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.399997 kubelet[3639]: W0123 17:28:47.398652 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.399997 kubelet[3639]: E0123 17:28:47.398658 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.399997 kubelet[3639]: E0123 17:28:47.398729 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.399997 kubelet[3639]: W0123 17:28:47.398733 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.399997 kubelet[3639]: E0123 17:28:47.398738 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.399997 kubelet[3639]: E0123 17:28:47.398964 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.399997 kubelet[3639]: W0123 17:28:47.398975 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.400434 kubelet[3639]: E0123 17:28:47.398985 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.400434 kubelet[3639]: E0123 17:28:47.399167 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.400434 kubelet[3639]: W0123 17:28:47.399175 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.400434 kubelet[3639]: E0123 17:28:47.399184 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.400434 kubelet[3639]: E0123 17:28:47.399313 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.400434 kubelet[3639]: W0123 17:28:47.399319 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.400434 kubelet[3639]: E0123 17:28:47.399326 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.400434 kubelet[3639]: E0123 17:28:47.399411 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.400434 kubelet[3639]: W0123 17:28:47.399416 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.400434 kubelet[3639]: E0123 17:28:47.399421 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.400585 kubelet[3639]: E0123 17:28:47.399493 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.400585 kubelet[3639]: W0123 17:28:47.399499 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.400585 kubelet[3639]: E0123 17:28:47.399503 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.400585 kubelet[3639]: E0123 17:28:47.399569 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.400585 kubelet[3639]: W0123 17:28:47.399572 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.400585 kubelet[3639]: E0123 17:28:47.399577 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.400585 kubelet[3639]: E0123 17:28:47.399653 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.400585 kubelet[3639]: W0123 17:28:47.399657 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.400585 kubelet[3639]: E0123 17:28:47.399661 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.400585 kubelet[3639]: E0123 17:28:47.399734 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.400760 kubelet[3639]: W0123 17:28:47.399738 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.400760 kubelet[3639]: E0123 17:28:47.399743 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.400760 kubelet[3639]: E0123 17:28:47.399821 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.400760 kubelet[3639]: W0123 17:28:47.399825 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.400760 kubelet[3639]: E0123 17:28:47.399829 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.400760 kubelet[3639]: E0123 17:28:47.399904 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.400760 kubelet[3639]: W0123 17:28:47.399909 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.400760 kubelet[3639]: E0123 17:28:47.399914 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.409000 audit: BPF prog-id=175 op=LOAD Jan 23 17:28:47.409000 audit: BPF prog-id=176 op=LOAD Jan 23 17:28:47.409000 audit[4056]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000228180 a2=98 a3=0 items=0 ppid=4044 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:47.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835613133343036343265653133363866626463383438316334376139 Jan 23 17:28:47.409000 audit: BPF prog-id=176 op=UNLOAD Jan 23 17:28:47.409000 audit[4056]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4044 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:47.409000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835613133343036343265653133363866626463383438316334376139 Jan 23 17:28:47.409000 audit: BPF prog-id=177 op=LOAD Jan 23 17:28:47.409000 audit[4056]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002283e8 a2=98 a3=0 items=0 ppid=4044 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:47.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835613133343036343265653133363866626463383438316334376139 Jan 23 17:28:47.410000 audit: BPF prog-id=178 op=LOAD Jan 23 17:28:47.410000 audit[4056]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000228168 a2=98 a3=0 items=0 ppid=4044 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:47.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835613133343036343265653133363866626463383438316334376139 Jan 23 17:28:47.410000 audit: BPF prog-id=178 op=UNLOAD Jan 23 17:28:47.410000 audit[4056]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4044 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 17:28:47.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835613133343036343265653133363866626463383438316334376139 Jan 23 17:28:47.410000 audit: BPF prog-id=177 op=UNLOAD Jan 23 17:28:47.410000 audit[4056]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4044 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:47.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835613133343036343265653133363866626463383438316334376139 Jan 23 17:28:47.410000 audit: BPF prog-id=179 op=LOAD Jan 23 17:28:47.410000 audit[4056]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000228648 a2=98 a3=0 items=0 ppid=4044 pid=4056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:47.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835613133343036343265653133363866626463383438316334376139 Jan 23 17:28:47.446822 containerd[2108]: time="2026-01-23T17:28:47.445756307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9d4c44679-99vwm,Uid:d8c8f838-9a91-44ba-95fd-c236336e624b,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"85a1340642ee1368fbdc8481c47a9214c365a54762c196bc7f9dda903afb13da\"" Jan 23 17:28:47.448751 containerd[2108]: time="2026-01-23T17:28:47.448722778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 17:28:47.463689 kubelet[3639]: E0123 17:28:47.463240 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.463689 kubelet[3639]: W0123 17:28:47.463686 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.464055 kubelet[3639]: E0123 17:28:47.463710 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.464055 kubelet[3639]: I0123 17:28:47.463746 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/25c5832d-778b-4f5d-974d-1be8e7376fdb-kubelet-dir\") pod \"csi-node-driver-dtvct\" (UID: \"25c5832d-778b-4f5d-974d-1be8e7376fdb\") " pod="calico-system/csi-node-driver-dtvct" Jan 23 17:28:47.464244 kubelet[3639]: E0123 17:28:47.464094 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.464244 kubelet[3639]: W0123 17:28:47.464107 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.464244 kubelet[3639]: E0123 17:28:47.464145 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.464244 kubelet[3639]: I0123 17:28:47.464161 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/25c5832d-778b-4f5d-974d-1be8e7376fdb-varrun\") pod \"csi-node-driver-dtvct\" (UID: \"25c5832d-778b-4f5d-974d-1be8e7376fdb\") " pod="calico-system/csi-node-driver-dtvct" Jan 23 17:28:47.464519 kubelet[3639]: E0123 17:28:47.464404 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.464519 kubelet[3639]: W0123 17:28:47.464415 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.464519 kubelet[3639]: E0123 17:28:47.464452 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.464519 kubelet[3639]: I0123 17:28:47.464471 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/25c5832d-778b-4f5d-974d-1be8e7376fdb-socket-dir\") pod \"csi-node-driver-dtvct\" (UID: \"25c5832d-778b-4f5d-974d-1be8e7376fdb\") " pod="calico-system/csi-node-driver-dtvct" Jan 23 17:28:47.464891 kubelet[3639]: E0123 17:28:47.464804 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.464891 kubelet[3639]: W0123 17:28:47.464818 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.464891 kubelet[3639]: E0123 17:28:47.464832 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.464891 kubelet[3639]: I0123 17:28:47.464845 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9pgj\" (UniqueName: \"kubernetes.io/projected/25c5832d-778b-4f5d-974d-1be8e7376fdb-kube-api-access-l9pgj\") pod \"csi-node-driver-dtvct\" (UID: \"25c5832d-778b-4f5d-974d-1be8e7376fdb\") " pod="calico-system/csi-node-driver-dtvct" Jan 23 17:28:47.465000 kubelet[3639]: E0123 17:28:47.464986 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.465000 kubelet[3639]: W0123 17:28:47.464993 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.465290 kubelet[3639]: E0123 17:28:47.465094 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.465290 kubelet[3639]: I0123 17:28:47.465119 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/25c5832d-778b-4f5d-974d-1be8e7376fdb-registration-dir\") pod \"csi-node-driver-dtvct\" (UID: \"25c5832d-778b-4f5d-974d-1be8e7376fdb\") " pod="calico-system/csi-node-driver-dtvct" Jan 23 17:28:47.465290 kubelet[3639]: E0123 17:28:47.465288 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.465412 kubelet[3639]: W0123 17:28:47.465296 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.465495 kubelet[3639]: E0123 17:28:47.465473 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.465712 kubelet[3639]: E0123 17:28:47.465592 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.465712 kubelet[3639]: W0123 17:28:47.465599 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.465712 kubelet[3639]: E0123 17:28:47.465639 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.466135 kubelet[3639]: E0123 17:28:47.466114 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.466135 kubelet[3639]: W0123 17:28:47.466139 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.466135 kubelet[3639]: E0123 17:28:47.466156 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.466504 kubelet[3639]: E0123 17:28:47.466306 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.466504 kubelet[3639]: W0123 17:28:47.466313 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.466504 kubelet[3639]: E0123 17:28:47.466325 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.466504 kubelet[3639]: E0123 17:28:47.466432 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.466504 kubelet[3639]: W0123 17:28:47.466437 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.466504 kubelet[3639]: E0123 17:28:47.466444 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.466838 kubelet[3639]: E0123 17:28:47.466533 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.466838 kubelet[3639]: W0123 17:28:47.466538 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.466838 kubelet[3639]: E0123 17:28:47.466543 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.466838 kubelet[3639]: E0123 17:28:47.466648 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.466838 kubelet[3639]: W0123 17:28:47.466659 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.466838 kubelet[3639]: E0123 17:28:47.466664 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.467332 kubelet[3639]: E0123 17:28:47.467266 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.467332 kubelet[3639]: W0123 17:28:47.467293 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.467332 kubelet[3639]: E0123 17:28:47.467303 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.467510 kubelet[3639]: E0123 17:28:47.467449 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.467510 kubelet[3639]: W0123 17:28:47.467455 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.467510 kubelet[3639]: E0123 17:28:47.467462 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.468086 kubelet[3639]: E0123 17:28:47.467576 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.468086 kubelet[3639]: W0123 17:28:47.467584 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.468086 kubelet[3639]: E0123 17:28:47.467590 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.481602 containerd[2108]: time="2026-01-23T17:28:47.481492442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9jn88,Uid:8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b,Namespace:calico-system,Attempt:0,}" Jan 23 17:28:47.542496 containerd[2108]: time="2026-01-23T17:28:47.542449057Z" level=info msg="connecting to shim c48ec7428c73386ced5eaf179e7caf3ae28dc49431de9980a30fb88944b6fa1e" address="unix:///run/containerd/s/1353152d335387966a04662c8f9633b8434e9635a5562e4122b8375703006cc4" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:28:47.561464 systemd[1]: Started cri-containerd-c48ec7428c73386ced5eaf179e7caf3ae28dc49431de9980a30fb88944b6fa1e.scope - libcontainer container c48ec7428c73386ced5eaf179e7caf3ae28dc49431de9980a30fb88944b6fa1e. Jan 23 17:28:47.565990 kubelet[3639]: E0123 17:28:47.565966 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.566476 kubelet[3639]: W0123 17:28:47.566093 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.566476 kubelet[3639]: E0123 17:28:47.566119 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.566808 kubelet[3639]: E0123 17:28:47.566788 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.566808 kubelet[3639]: W0123 17:28:47.566805 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.567125 kubelet[3639]: E0123 17:28:47.566824 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.567428 kubelet[3639]: E0123 17:28:47.567302 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.567428 kubelet[3639]: W0123 17:28:47.567316 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.567428 kubelet[3639]: E0123 17:28:47.567334 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.568034 kubelet[3639]: E0123 17:28:47.567966 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.568373 kubelet[3639]: W0123 17:28:47.568176 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.568373 kubelet[3639]: E0123 17:28:47.568205 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.568867 kubelet[3639]: E0123 17:28:47.568770 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.569079 kubelet[3639]: W0123 17:28:47.568928 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.569079 kubelet[3639]: E0123 17:28:47.569057 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.569872 kubelet[3639]: E0123 17:28:47.569852 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.570141 kubelet[3639]: W0123 17:28:47.569959 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.570369 kubelet[3639]: E0123 17:28:47.570329 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.570900 kubelet[3639]: E0123 17:28:47.570719 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.570900 kubelet[3639]: W0123 17:28:47.570730 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.570900 kubelet[3639]: E0123 17:28:47.570861 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.571076 kubelet[3639]: E0123 17:28:47.571064 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.571140 kubelet[3639]: W0123 17:28:47.571130 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.571403 kubelet[3639]: E0123 17:28:47.571352 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.571615 kubelet[3639]: E0123 17:28:47.571501 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.571615 kubelet[3639]: W0123 17:28:47.571510 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.571615 kubelet[3639]: E0123 17:28:47.571546 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.571778 kubelet[3639]: E0123 17:28:47.571736 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.571778 kubelet[3639]: W0123 17:28:47.571746 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.571778 kubelet[3639]: E0123 17:28:47.571774 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.572081 kubelet[3639]: E0123 17:28:47.572009 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.572081 kubelet[3639]: W0123 17:28:47.572020 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.572081 kubelet[3639]: E0123 17:28:47.572037 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.572265 kubelet[3639]: E0123 17:28:47.572247 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.572390 kubelet[3639]: W0123 17:28:47.572262 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.572418 kubelet[3639]: E0123 17:28:47.572402 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.572673 kubelet[3639]: E0123 17:28:47.572657 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.572673 kubelet[3639]: W0123 17:28:47.572670 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.572784 kubelet[3639]: E0123 17:28:47.572718 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.573013 kubelet[3639]: E0123 17:28:47.572996 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.573013 kubelet[3639]: W0123 17:28:47.573011 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.573084 kubelet[3639]: E0123 17:28:47.573070 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.573355 kubelet[3639]: E0123 17:28:47.573338 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.573355 kubelet[3639]: W0123 17:28:47.573352 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.573487 kubelet[3639]: E0123 17:28:47.573403 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.573645 kubelet[3639]: E0123 17:28:47.573628 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.573645 kubelet[3639]: W0123 17:28:47.573641 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.573853 kubelet[3639]: E0123 17:28:47.573690 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.574094 kubelet[3639]: E0123 17:28:47.574076 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.574094 kubelet[3639]: W0123 17:28:47.574091 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.574183 kubelet[3639]: E0123 17:28:47.574139 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.574450 kubelet[3639]: E0123 17:28:47.574345 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.574488 kubelet[3639]: W0123 17:28:47.574450 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.574579 kubelet[3639]: E0123 17:28:47.574505 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.574796 kubelet[3639]: E0123 17:28:47.574779 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.574796 kubelet[3639]: W0123 17:28:47.574794 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.574887 kubelet[3639]: E0123 17:28:47.574852 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.575118 kubelet[3639]: E0123 17:28:47.575100 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.575118 kubelet[3639]: W0123 17:28:47.575115 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.575238 kubelet[3639]: E0123 17:28:47.575160 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.575461 kubelet[3639]: E0123 17:28:47.575444 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.575461 kubelet[3639]: W0123 17:28:47.575457 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.575605 kubelet[3639]: E0123 17:28:47.575473 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.575814 kubelet[3639]: E0123 17:28:47.575794 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.575814 kubelet[3639]: W0123 17:28:47.575812 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.575907 kubelet[3639]: E0123 17:28:47.575877 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.576304 kubelet[3639]: E0123 17:28:47.576248 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.576392 kubelet[3639]: W0123 17:28:47.576262 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.576453 kubelet[3639]: E0123 17:28:47.576435 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.576691 kubelet[3639]: E0123 17:28:47.576673 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.576691 kubelet[3639]: W0123 17:28:47.576689 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.577365 kubelet[3639]: E0123 17:28:47.576752 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:47.577365 kubelet[3639]: E0123 17:28:47.577096 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.577365 kubelet[3639]: W0123 17:28:47.577109 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.577365 kubelet[3639]: E0123 17:28:47.577121 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.578000 audit: BPF prog-id=180 op=LOAD Jan 23 17:28:47.580000 audit: BPF prog-id=181 op=LOAD Jan 23 17:28:47.580000 audit[4153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=4142 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:47.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334386563373432386337333338366365643565616631373965376361 Jan 23 17:28:47.580000 audit: BPF prog-id=181 op=UNLOAD Jan 23 17:28:47.580000 audit[4153]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4142 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:47.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334386563373432386337333338366365643565616631373965376361 Jan 23 17:28:47.580000 audit: BPF prog-id=182 op=LOAD Jan 23 17:28:47.580000 audit[4153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=4142 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:47.580000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334386563373432386337333338366365643565616631373965376361 Jan 23 17:28:47.580000 audit: BPF prog-id=183 op=LOAD Jan 23 17:28:47.580000 audit[4153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=4142 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:47.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334386563373432386337333338366365643565616631373965376361 Jan 23 17:28:47.580000 audit: BPF prog-id=183 op=UNLOAD Jan 23 17:28:47.580000 audit[4153]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4142 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:47.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334386563373432386337333338366365643565616631373965376361 Jan 23 17:28:47.580000 audit: BPF prog-id=182 op=UNLOAD Jan 23 17:28:47.580000 audit[4153]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4142 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
23 17:28:47.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334386563373432386337333338366365643565616631373965376361 Jan 23 17:28:47.580000 audit: BPF prog-id=184 op=LOAD Jan 23 17:28:47.580000 audit[4153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=4142 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:47.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334386563373432386337333338366365643565616631373965376361 Jan 23 17:28:47.586327 kubelet[3639]: E0123 17:28:47.586170 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:47.586327 kubelet[3639]: W0123 17:28:47.586190 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:47.586327 kubelet[3639]: E0123 17:28:47.586209 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:47.604030 containerd[2108]: time="2026-01-23T17:28:47.603903012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9jn88,Uid:8e864e0b-68f2-4a03-ab3e-dcaa2cad5e7b,Namespace:calico-system,Attempt:0,} returns sandbox id \"c48ec7428c73386ced5eaf179e7caf3ae28dc49431de9980a30fb88944b6fa1e\"" Jan 23 17:28:47.944000 audit[4206]: NETFILTER_CFG table=filter:120 family=2 entries=22 op=nft_register_rule pid=4206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:47.944000 audit[4206]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffed21d5b0 a2=0 a3=1 items=0 ppid=3770 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:47.944000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:47.950000 audit[4206]: NETFILTER_CFG table=nat:121 family=2 entries=12 op=nft_register_rule pid=4206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:47.950000 audit[4206]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffed21d5b0 a2=0 a3=1 items=0 ppid=3770 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:47.950000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:48.894963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2518555968.mount: Deactivated successfully. 
Jan 23 17:28:49.321475 containerd[2108]: time="2026-01-23T17:28:49.321431702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:49.334048 containerd[2108]: time="2026-01-23T17:28:49.333977574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Jan 23 17:28:49.338132 containerd[2108]: time="2026-01-23T17:28:49.338074812Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:49.343377 containerd[2108]: time="2026-01-23T17:28:49.343318739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:49.343882 containerd[2108]: time="2026-01-23T17:28:49.343670207Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.894916795s" Jan 23 17:28:49.343882 containerd[2108]: time="2026-01-23T17:28:49.343699568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 23 17:28:49.345410 containerd[2108]: time="2026-01-23T17:28:49.345209197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 17:28:49.355465 containerd[2108]: time="2026-01-23T17:28:49.355433971Z" level=info msg="CreateContainer within sandbox \"85a1340642ee1368fbdc8481c47a9214c365a54762c196bc7f9dda903afb13da\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 17:28:49.357326 kubelet[3639]: E0123 17:28:49.356932 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtvct" podUID="25c5832d-778b-4f5d-974d-1be8e7376fdb" Jan 23 17:28:49.380702 containerd[2108]: time="2026-01-23T17:28:49.380660044Z" level=info msg="Container ea53fea3513a6ead4721898bfb0c4a28b4cc1925d905627bd8059d6790926a56: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:28:49.419149 containerd[2108]: time="2026-01-23T17:28:49.419097082Z" level=info msg="CreateContainer within sandbox \"85a1340642ee1368fbdc8481c47a9214c365a54762c196bc7f9dda903afb13da\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ea53fea3513a6ead4721898bfb0c4a28b4cc1925d905627bd8059d6790926a56\"" Jan 23 17:28:49.419978 containerd[2108]: time="2026-01-23T17:28:49.419943922Z" level=info msg="StartContainer for \"ea53fea3513a6ead4721898bfb0c4a28b4cc1925d905627bd8059d6790926a56\"" Jan 23 17:28:49.421723 containerd[2108]: time="2026-01-23T17:28:49.421501137Z" level=info msg="connecting to shim ea53fea3513a6ead4721898bfb0c4a28b4cc1925d905627bd8059d6790926a56" address="unix:///run/containerd/s/ca5abdbfc32d7af73dd140a97f72afddd2fc03260f0dfeaf02a04718940e3520" protocol=ttrpc version=3 Jan 23 17:28:49.439461 systemd[1]: Started cri-containerd-ea53fea3513a6ead4721898bfb0c4a28b4cc1925d905627bd8059d6790926a56.scope - libcontainer container ea53fea3513a6ead4721898bfb0c4a28b4cc1925d905627bd8059d6790926a56. 
Jan 23 17:28:49.448000 audit: BPF prog-id=185 op=LOAD Jan 23 17:28:49.448000 audit: BPF prog-id=186 op=LOAD Jan 23 17:28:49.448000 audit[4217]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=4044 pid=4217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:49.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353366656133353133613665616434373231383938626662306334 Jan 23 17:28:49.448000 audit: BPF prog-id=186 op=UNLOAD Jan 23 17:28:49.448000 audit[4217]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4044 pid=4217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:49.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353366656133353133613665616434373231383938626662306334 Jan 23 17:28:49.448000 audit: BPF prog-id=187 op=LOAD Jan 23 17:28:49.448000 audit[4217]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=4044 pid=4217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:49.448000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353366656133353133613665616434373231383938626662306334 Jan 23 17:28:49.448000 audit: BPF prog-id=188 op=LOAD Jan 23 17:28:49.448000 audit[4217]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=4044 pid=4217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:49.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353366656133353133613665616434373231383938626662306334 Jan 23 17:28:49.448000 audit: BPF prog-id=188 op=UNLOAD Jan 23 17:28:49.448000 audit[4217]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4044 pid=4217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:49.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353366656133353133613665616434373231383938626662306334 Jan 23 17:28:49.448000 audit: BPF prog-id=187 op=UNLOAD Jan 23 17:28:49.448000 audit[4217]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4044 pid=4217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
23 17:28:49.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353366656133353133613665616434373231383938626662306334 Jan 23 17:28:49.448000 audit: BPF prog-id=189 op=LOAD Jan 23 17:28:49.448000 audit[4217]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=4044 pid=4217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:49.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353366656133353133613665616434373231383938626662306334 Jan 23 17:28:49.478098 containerd[2108]: time="2026-01-23T17:28:49.478058273Z" level=info msg="StartContainer for \"ea53fea3513a6ead4721898bfb0c4a28b4cc1925d905627bd8059d6790926a56\" returns successfully" Jan 23 17:28:50.520522 kubelet[3639]: E0123 17:28:50.520472 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.520522 kubelet[3639]: W0123 17:28:50.520495 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.521514 kubelet[3639]: E0123 17:28:50.521029 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:50.521514 kubelet[3639]: E0123 17:28:50.521226 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.521514 kubelet[3639]: W0123 17:28:50.521234 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.521514 kubelet[3639]: E0123 17:28:50.521303 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:50.521514 kubelet[3639]: E0123 17:28:50.521440 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.521514 kubelet[3639]: W0123 17:28:50.521447 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.521514 kubelet[3639]: E0123 17:28:50.521454 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:50.522054 kubelet[3639]: E0123 17:28:50.521810 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.522054 kubelet[3639]: W0123 17:28:50.521821 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.522054 kubelet[3639]: E0123 17:28:50.521931 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:50.522362 kubelet[3639]: E0123 17:28:50.522307 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.522362 kubelet[3639]: W0123 17:28:50.522317 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.522362 kubelet[3639]: E0123 17:28:50.522331 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:50.522673 kubelet[3639]: E0123 17:28:50.522621 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.522673 kubelet[3639]: W0123 17:28:50.522632 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.522673 kubelet[3639]: E0123 17:28:50.522641 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:50.522990 kubelet[3639]: E0123 17:28:50.522955 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.522990 kubelet[3639]: W0123 17:28:50.522965 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.522990 kubelet[3639]: E0123 17:28:50.522974 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:50.523346 kubelet[3639]: E0123 17:28:50.523263 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.523346 kubelet[3639]: W0123 17:28:50.523293 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.523346 kubelet[3639]: E0123 17:28:50.523305 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:50.523652 kubelet[3639]: E0123 17:28:50.523606 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.523652 kubelet[3639]: W0123 17:28:50.523616 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.523652 kubelet[3639]: E0123 17:28:50.523625 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:50.523915 kubelet[3639]: E0123 17:28:50.523906 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.524040 kubelet[3639]: W0123 17:28:50.523970 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.524040 kubelet[3639]: E0123 17:28:50.524003 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:50.524224 kubelet[3639]: E0123 17:28:50.524195 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.524336 kubelet[3639]: W0123 17:28:50.524204 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.524336 kubelet[3639]: E0123 17:28:50.524300 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:50.524577 kubelet[3639]: E0123 17:28:50.524500 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.524577 kubelet[3639]: W0123 17:28:50.524509 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.524577 kubelet[3639]: E0123 17:28:50.524518 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:50.524791 kubelet[3639]: E0123 17:28:50.524783 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.524875 kubelet[3639]: W0123 17:28:50.524830 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.524875 kubelet[3639]: E0123 17:28:50.524842 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:50.525074 kubelet[3639]: E0123 17:28:50.525028 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.525074 kubelet[3639]: W0123 17:28:50.525036 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.525074 kubelet[3639]: E0123 17:28:50.525044 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:50.525296 kubelet[3639]: E0123 17:28:50.525250 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.525296 kubelet[3639]: W0123 17:28:50.525258 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.525434 kubelet[3639]: E0123 17:28:50.525370 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:50.594337 kubelet[3639]: E0123 17:28:50.594238 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.594897 kubelet[3639]: W0123 17:28:50.594437 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.594897 kubelet[3639]: E0123 17:28:50.594575 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:50.595330 kubelet[3639]: E0123 17:28:50.595239 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.595330 kubelet[3639]: W0123 17:28:50.595254 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.595330 kubelet[3639]: E0123 17:28:50.595282 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:50.595720 kubelet[3639]: E0123 17:28:50.595707 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.595889 kubelet[3639]: W0123 17:28:50.595781 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.595889 kubelet[3639]: E0123 17:28:50.595869 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:50.596681 kubelet[3639]: E0123 17:28:50.596574 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.596681 kubelet[3639]: W0123 17:28:50.596664 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.596905 kubelet[3639]: E0123 17:28:50.596823 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:50.597663 kubelet[3639]: E0123 17:28:50.597600 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.597663 kubelet[3639]: W0123 17:28:50.597612 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.598370 kubelet[3639]: E0123 17:28:50.597938 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:50.598852 kubelet[3639]: E0123 17:28:50.598773 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.598948 kubelet[3639]: W0123 17:28:50.598925 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.599038 kubelet[3639]: E0123 17:28:50.599026 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:50.599394 kubelet[3639]: E0123 17:28:50.599375 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.599394 kubelet[3639]: W0123 17:28:50.599390 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.599466 kubelet[3639]: E0123 17:28:50.599406 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:50.599635 kubelet[3639]: E0123 17:28:50.599607 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.599635 kubelet[3639]: W0123 17:28:50.599618 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.599635 kubelet[3639]: E0123 17:28:50.599630 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:50.599963 kubelet[3639]: E0123 17:28:50.599944 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.599963 kubelet[3639]: W0123 17:28:50.599956 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.599963 kubelet[3639]: E0123 17:28:50.599970 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:50.600228 kubelet[3639]: E0123 17:28:50.600216 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.600228 kubelet[3639]: W0123 17:28:50.600227 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.600454 kubelet[3639]: E0123 17:28:50.600241 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:50.600570 kubelet[3639]: E0123 17:28:50.600543 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.600570 kubelet[3639]: W0123 17:28:50.600557 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.600698 kubelet[3639]: E0123 17:28:50.600606 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:50.600831 kubelet[3639]: E0123 17:28:50.600796 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.600831 kubelet[3639]: W0123 17:28:50.600807 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.600831 kubelet[3639]: E0123 17:28:50.600820 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:50.601038 kubelet[3639]: E0123 17:28:50.601005 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.601038 kubelet[3639]: W0123 17:28:50.601020 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.601038 kubelet[3639]: E0123 17:28:50.601033 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:50.601215 kubelet[3639]: E0123 17:28:50.601192 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.601215 kubelet[3639]: W0123 17:28:50.601201 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.601215 kubelet[3639]: E0123 17:28:50.601213 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:50.601541 kubelet[3639]: E0123 17:28:50.601509 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.601541 kubelet[3639]: W0123 17:28:50.601522 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.601541 kubelet[3639]: E0123 17:28:50.601536 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:50.602296 kubelet[3639]: E0123 17:28:50.602138 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.602296 kubelet[3639]: W0123 17:28:50.602154 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.602296 kubelet[3639]: E0123 17:28:50.602171 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:50.602398 kubelet[3639]: E0123 17:28:50.602386 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.602416 kubelet[3639]: W0123 17:28:50.602397 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.602416 kubelet[3639]: E0123 17:28:50.602406 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:28:50.603086 kubelet[3639]: E0123 17:28:50.603051 3639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:28:50.603308 kubelet[3639]: W0123 17:28:50.603239 3639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:28:50.603308 kubelet[3639]: E0123 17:28:50.603256 3639 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:28:50.638178 containerd[2108]: time="2026-01-23T17:28:50.638126059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:50.642588 containerd[2108]: time="2026-01-23T17:28:50.642530506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 23 17:28:50.646210 containerd[2108]: time="2026-01-23T17:28:50.646172439Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:50.652728 containerd[2108]: time="2026-01-23T17:28:50.652682500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:50.653165 containerd[2108]: time="2026-01-23T17:28:50.653137598Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.307899271s" Jan 23 17:28:50.653216 containerd[2108]: time="2026-01-23T17:28:50.653169832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 23 17:28:50.656341 containerd[2108]: time="2026-01-23T17:28:50.656310096Z" level=info msg="CreateContainer within sandbox \"c48ec7428c73386ced5eaf179e7caf3ae28dc49431de9980a30fb88944b6fa1e\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 17:28:50.685313 containerd[2108]: time="2026-01-23T17:28:50.683219959Z" level=info msg="Container 842a1472c414f4523deefdc1607bc10e5026d4d22056cead6728b4309a033079: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:28:50.763463 containerd[2108]: time="2026-01-23T17:28:50.763262566Z" level=info msg="CreateContainer within sandbox \"c48ec7428c73386ced5eaf179e7caf3ae28dc49431de9980a30fb88944b6fa1e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"842a1472c414f4523deefdc1607bc10e5026d4d22056cead6728b4309a033079\"" Jan 23 17:28:50.764384 containerd[2108]: time="2026-01-23T17:28:50.764296384Z" level=info msg="StartContainer for \"842a1472c414f4523deefdc1607bc10e5026d4d22056cead6728b4309a033079\"" Jan 23 17:28:50.766537 containerd[2108]: time="2026-01-23T17:28:50.766487723Z" level=info msg="connecting to shim 842a1472c414f4523deefdc1607bc10e5026d4d22056cead6728b4309a033079" address="unix:///run/containerd/s/1353152d335387966a04662c8f9633b8434e9635a5562e4122b8375703006cc4" protocol=ttrpc version=3 Jan 23 17:28:50.791459 systemd[1]: Started cri-containerd-842a1472c414f4523deefdc1607bc10e5026d4d22056cead6728b4309a033079.scope - libcontainer container 842a1472c414f4523deefdc1607bc10e5026d4d22056cead6728b4309a033079. 
Jan 23 17:28:50.875000 audit: BPF prog-id=190 op=LOAD Jan 23 17:28:50.879817 kernel: kauditd_printk_skb: 74 callbacks suppressed Jan 23 17:28:50.879909 kernel: audit: type=1334 audit(1769189330.875:587): prog-id=190 op=LOAD Jan 23 17:28:50.875000 audit[4292]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4142 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:50.900808 kernel: audit: type=1300 audit(1769189330.875:587): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4142 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:50.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834326131343732633431346634353233646565666463313630376263 Jan 23 17:28:50.917316 kernel: audit: type=1327 audit(1769189330.875:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834326131343732633431346634353233646565666463313630376263 Jan 23 17:28:50.875000 audit: BPF prog-id=191 op=LOAD Jan 23 17:28:50.923225 kernel: audit: type=1334 audit(1769189330.875:588): prog-id=191 op=LOAD Jan 23 17:28:50.875000 audit[4292]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4142 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:50.939841 kernel: audit: type=1300 audit(1769189330.875:588): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4142 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:50.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834326131343732633431346634353233646565666463313630376263 Jan 23 17:28:50.956364 kernel: audit: type=1327 audit(1769189330.875:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834326131343732633431346634353233646565666463313630376263 Jan 23 17:28:50.883000 audit: BPF prog-id=191 op=UNLOAD Jan 23 17:28:50.961655 kernel: audit: type=1334 audit(1769189330.883:589): prog-id=191 op=UNLOAD Jan 23 17:28:50.883000 audit[4292]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4142 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:50.977236 kernel: audit: type=1300 audit(1769189330.883:589): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4142 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:50.883000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834326131343732633431346634353233646565666463313630376263 Jan 23 17:28:50.995545 kernel: audit: type=1327 audit(1769189330.883:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834326131343732633431346634353233646565666463313630376263 Jan 23 17:28:50.883000 audit: BPF prog-id=190 op=UNLOAD Jan 23 17:28:51.001575 kernel: audit: type=1334 audit(1769189330.883:590): prog-id=190 op=UNLOAD Jan 23 17:28:50.883000 audit[4292]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4142 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:50.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834326131343732633431346634353233646565666463313630376263 Jan 23 17:28:50.883000 audit: BPF prog-id=192 op=LOAD Jan 23 17:28:50.883000 audit[4292]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4142 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:50.883000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834326131343732633431346634353233646565666463313630376263 Jan 23 17:28:51.006049 containerd[2108]: time="2026-01-23T17:28:51.006018558Z" level=info msg="StartContainer for \"842a1472c414f4523deefdc1607bc10e5026d4d22056cead6728b4309a033079\" returns successfully" Jan 23 17:28:51.013794 systemd[1]: cri-containerd-842a1472c414f4523deefdc1607bc10e5026d4d22056cead6728b4309a033079.scope: Deactivated successfully. Jan 23 17:28:51.016000 audit: BPF prog-id=192 op=UNLOAD Jan 23 17:28:51.021820 containerd[2108]: time="2026-01-23T17:28:51.021773819Z" level=info msg="received container exit event container_id:\"842a1472c414f4523deefdc1607bc10e5026d4d22056cead6728b4309a033079\" id:\"842a1472c414f4523deefdc1607bc10e5026d4d22056cead6728b4309a033079\" pid:4305 exited_at:{seconds:1769189331 nanos:21035338}" Jan 23 17:28:51.040959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-842a1472c414f4523deefdc1607bc10e5026d4d22056cead6728b4309a033079-rootfs.mount: Deactivated successfully. 
Jan 23 17:28:51.357512 kubelet[3639]: E0123 17:28:51.357391 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtvct" podUID="25c5832d-778b-4f5d-974d-1be8e7376fdb" Jan 23 17:28:51.444595 kubelet[3639]: I0123 17:28:51.444382 3639 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 17:28:51.459901 kubelet[3639]: I0123 17:28:51.459738 3639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-9d4c44679-99vwm" podStartSLOduration=3.563153148 podStartE2EDuration="5.459719484s" podCreationTimestamp="2026-01-23 17:28:46 +0000 UTC" firstStartedPulling="2026-01-23 17:28:47.447974952 +0000 UTC m=+21.164854490" lastFinishedPulling="2026-01-23 17:28:49.34454128 +0000 UTC m=+23.061420826" observedRunningTime="2026-01-23 17:28:50.455463857 +0000 UTC m=+24.172343403" watchObservedRunningTime="2026-01-23 17:28:51.459719484 +0000 UTC m=+25.176599030" Jan 23 17:28:51.983000 audit[4344]: NETFILTER_CFG table=filter:122 family=2 entries=21 op=nft_register_rule pid=4344 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:28:51.983000 audit[4344]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffc151160 a2=0 a3=1 items=0 ppid=3770 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:51.983000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:51.986000 audit[4344]: NETFILTER_CFG table=nat:123 family=2 entries=19 op=nft_register_chain pid=4344 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 
17:28:51.986000 audit[4344]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffffc151160 a2=0 a3=1 items=0 ppid=3770 pid=4344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:51.986000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:28:52.453442 containerd[2108]: time="2026-01-23T17:28:52.453105494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 17:28:53.357568 kubelet[3639]: E0123 17:28:53.357516 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtvct" podUID="25c5832d-778b-4f5d-974d-1be8e7376fdb" Jan 23 17:28:54.670084 containerd[2108]: time="2026-01-23T17:28:54.670030772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:54.675293 containerd[2108]: time="2026-01-23T17:28:54.675220876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 23 17:28:54.680523 containerd[2108]: time="2026-01-23T17:28:54.680488880Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:54.685578 containerd[2108]: time="2026-01-23T17:28:54.685542064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:28:54.685982 
containerd[2108]: time="2026-01-23T17:28:54.685956201Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.232812881s" Jan 23 17:28:54.686033 containerd[2108]: time="2026-01-23T17:28:54.686016093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 23 17:28:54.689239 containerd[2108]: time="2026-01-23T17:28:54.689206377Z" level=info msg="CreateContainer within sandbox \"c48ec7428c73386ced5eaf179e7caf3ae28dc49431de9980a30fb88944b6fa1e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 17:28:54.729148 containerd[2108]: time="2026-01-23T17:28:54.728325131Z" level=info msg="Container 8311e732811b3f434e043450d993547d02be8bc0b4b0dd181d1d34e2886b5035: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:28:54.768125 containerd[2108]: time="2026-01-23T17:28:54.768072483Z" level=info msg="CreateContainer within sandbox \"c48ec7428c73386ced5eaf179e7caf3ae28dc49431de9980a30fb88944b6fa1e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8311e732811b3f434e043450d993547d02be8bc0b4b0dd181d1d34e2886b5035\"" Jan 23 17:28:54.769078 containerd[2108]: time="2026-01-23T17:28:54.769006940Z" level=info msg="StartContainer for \"8311e732811b3f434e043450d993547d02be8bc0b4b0dd181d1d34e2886b5035\"" Jan 23 17:28:54.770505 containerd[2108]: time="2026-01-23T17:28:54.770478431Z" level=info msg="connecting to shim 8311e732811b3f434e043450d993547d02be8bc0b4b0dd181d1d34e2886b5035" address="unix:///run/containerd/s/1353152d335387966a04662c8f9633b8434e9635a5562e4122b8375703006cc4" protocol=ttrpc version=3 Jan 23 17:28:54.788469 systemd[1]: 
Started cri-containerd-8311e732811b3f434e043450d993547d02be8bc0b4b0dd181d1d34e2886b5035.scope - libcontainer container 8311e732811b3f434e043450d993547d02be8bc0b4b0dd181d1d34e2886b5035. Jan 23 17:28:54.825000 audit: BPF prog-id=193 op=LOAD Jan 23 17:28:54.825000 audit[4354]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4142 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:54.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833313165373332383131623366343334653034333435306439393335 Jan 23 17:28:54.825000 audit: BPF prog-id=194 op=LOAD Jan 23 17:28:54.825000 audit[4354]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4142 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:54.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833313165373332383131623366343334653034333435306439393335 Jan 23 17:28:54.825000 audit: BPF prog-id=194 op=UNLOAD Jan 23 17:28:54.825000 audit[4354]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4142 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:54.825000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833313165373332383131623366343334653034333435306439393335 Jan 23 17:28:54.825000 audit: BPF prog-id=193 op=UNLOAD Jan 23 17:28:54.825000 audit[4354]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4142 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:54.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833313165373332383131623366343334653034333435306439393335 Jan 23 17:28:54.825000 audit: BPF prog-id=195 op=LOAD Jan 23 17:28:54.825000 audit[4354]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4142 pid=4354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:28:54.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833313165373332383131623366343334653034333435306439393335 Jan 23 17:28:54.854136 containerd[2108]: time="2026-01-23T17:28:54.854073028Z" level=info msg="StartContainer for \"8311e732811b3f434e043450d993547d02be8bc0b4b0dd181d1d34e2886b5035\" returns successfully" Jan 23 17:28:55.360300 kubelet[3639]: E0123 17:28:55.358991 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtvct" podUID="25c5832d-778b-4f5d-974d-1be8e7376fdb" Jan 23 17:28:56.030586 containerd[2108]: time="2026-01-23T17:28:56.030519105Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 17:28:56.032999 systemd[1]: cri-containerd-8311e732811b3f434e043450d993547d02be8bc0b4b0dd181d1d34e2886b5035.scope: Deactivated successfully. Jan 23 17:28:56.033895 systemd[1]: cri-containerd-8311e732811b3f434e043450d993547d02be8bc0b4b0dd181d1d34e2886b5035.scope: Consumed 346ms CPU time, 184.8M memory peak, 165.9M written to disk. Jan 23 17:28:56.034736 containerd[2108]: time="2026-01-23T17:28:56.034568794Z" level=info msg="received container exit event container_id:\"8311e732811b3f434e043450d993547d02be8bc0b4b0dd181d1d34e2886b5035\" id:\"8311e732811b3f434e043450d993547d02be8bc0b4b0dd181d1d34e2886b5035\" pid:4366 exited_at:{seconds:1769189336 nanos:33539419}" Jan 23 17:28:56.037000 audit: BPF prog-id=195 op=UNLOAD Jan 23 17:28:56.041846 kernel: kauditd_printk_skb: 27 callbacks suppressed Jan 23 17:28:56.041930 kernel: audit: type=1334 audit(1769189336.037:600): prog-id=195 op=UNLOAD Jan 23 17:28:56.058240 kubelet[3639]: I0123 17:28:56.058206 3639 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 17:28:56.062924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8311e732811b3f434e043450d993547d02be8bc0b4b0dd181d1d34e2886b5035-rootfs.mount: Deactivated successfully. Jan 23 17:28:56.109602 systemd[1]: Created slice kubepods-burstable-pod0601b217_86e0_4d5d_8bdc_4a1067d58ca6.slice - libcontainer container kubepods-burstable-pod0601b217_86e0_4d5d_8bdc_4a1067d58ca6.slice. 
Jan 23 17:28:56.122575 systemd[1]: Created slice kubepods-burstable-podfc7b3f96_a778_4d13_a8d9_a43b196fdac0.slice - libcontainer container kubepods-burstable-podfc7b3f96_a778_4d13_a8d9_a43b196fdac0.slice. Jan 23 17:28:56.456663 kubelet[3639]: W0123 17:28:56.125501 3639 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ci-4547.1.0-a-f00ee6181d" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4547.1.0-a-f00ee6181d' and this object Jan 23 17:28:56.456663 kubelet[3639]: E0123 17:28:56.125536 3639 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ci-4547.1.0-a-f00ee6181d\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4547.1.0-a-f00ee6181d' and this object" logger="UnhandledError" Jan 23 17:28:56.456663 kubelet[3639]: W0123 17:28:56.125616 3639 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4547.1.0-a-f00ee6181d" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4547.1.0-a-f00ee6181d' and this object Jan 23 17:28:56.456663 kubelet[3639]: E0123 17:28:56.125627 3639 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4547.1.0-a-f00ee6181d\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4547.1.0-a-f00ee6181d' and this 
object" logger="UnhandledError" Jan 23 17:28:56.131105 systemd[1]: Created slice kubepods-besteffort-podc60749af_cedd_49c6_899a_24ca91720bf5.slice - libcontainer container kubepods-besteffort-podc60749af_cedd_49c6_899a_24ca91720bf5.slice. Jan 23 17:28:56.457019 kubelet[3639]: W0123 17:28:56.125654 3639 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4547.1.0-a-f00ee6181d" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4547.1.0-a-f00ee6181d' and this object Jan 23 17:28:56.457019 kubelet[3639]: E0123 17:28:56.125662 3639 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4547.1.0-a-f00ee6181d\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4547.1.0-a-f00ee6181d' and this object" logger="UnhandledError" Jan 23 17:28:56.457019 kubelet[3639]: W0123 17:28:56.125686 3639 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ci-4547.1.0-a-f00ee6181d" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4547.1.0-a-f00ee6181d' and this object Jan 23 17:28:56.457019 kubelet[3639]: E0123 17:28:56.125693 3639 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ci-4547.1.0-a-f00ee6181d\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 
'ci-4547.1.0-a-f00ee6181d' and this object" logger="UnhandledError" Jan 23 17:28:56.457019 kubelet[3639]: W0123 17:28:56.125723 3639 reflector.go:569] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ci-4547.1.0-a-f00ee6181d" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4547.1.0-a-f00ee6181d' and this object Jan 23 17:28:56.135566 systemd[1]: Created slice kubepods-besteffort-podd6234171_70b3_48b5_98d5_2c3cd8e41f24.slice - libcontainer container kubepods-besteffort-podd6234171_70b3_48b5_98d5_2c3cd8e41f24.slice. Jan 23 17:28:56.457128 kubelet[3639]: E0123 17:28:56.125731 3639 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ci-4547.1.0-a-f00ee6181d\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4547.1.0-a-f00ee6181d' and this object" logger="UnhandledError" Jan 23 17:28:56.457128 kubelet[3639]: I0123 17:28:56.235739 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d6234171-70b3-48b5-98d5-2c3cd8e41f24-calico-apiserver-certs\") pod \"calico-apiserver-66f6568cfc-b7js4\" (UID: \"d6234171-70b3-48b5-98d5-2c3cd8e41f24\") " pod="calico-apiserver/calico-apiserver-66f6568cfc-b7js4" Jan 23 17:28:56.457128 kubelet[3639]: I0123 17:28:56.235778 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a12545f0-91c5-4708-a845-2a7a18a8616c-config\") pod \"goldmane-666569f655-wq8fz\" (UID: \"a12545f0-91c5-4708-a845-2a7a18a8616c\") " pod="calico-system/goldmane-666569f655-wq8fz" Jan 23 17:28:56.457128 
kubelet[3639]: I0123 17:28:56.235795 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc7b3f96-a778-4d13-a8d9-a43b196fdac0-config-volume\") pod \"coredns-668d6bf9bc-2tmvt\" (UID: \"fc7b3f96-a778-4d13-a8d9-a43b196fdac0\") " pod="kube-system/coredns-668d6bf9bc-2tmvt" Jan 23 17:28:56.457128 kubelet[3639]: I0123 17:28:56.235814 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a12545f0-91c5-4708-a845-2a7a18a8616c-goldmane-ca-bundle\") pod \"goldmane-666569f655-wq8fz\" (UID: \"a12545f0-91c5-4708-a845-2a7a18a8616c\") " pod="calico-system/goldmane-666569f655-wq8fz" Jan 23 17:28:56.145488 systemd[1]: Created slice kubepods-besteffort-pod1c635441_6948_4d33_9972_c0de361e6d46.slice - libcontainer container kubepods-besteffort-pod1c635441_6948_4d33_9972_c0de361e6d46.slice. Jan 23 17:28:56.457235 kubelet[3639]: I0123 17:28:56.235825 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gvjf\" (UniqueName: \"kubernetes.io/projected/c60749af-cedd-49c6-899a-24ca91720bf5-kube-api-access-5gvjf\") pod \"calico-kube-controllers-7fcbcd85c4-2prtk\" (UID: \"c60749af-cedd-49c6-899a-24ca91720bf5\") " pod="calico-system/calico-kube-controllers-7fcbcd85c4-2prtk" Jan 23 17:28:56.457235 kubelet[3639]: I0123 17:28:56.235836 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c635441-6948-4d33-9972-c0de361e6d46-whisker-ca-bundle\") pod \"whisker-5959b469c8-h758b\" (UID: \"1c635441-6948-4d33-9972-c0de361e6d46\") " pod="calico-system/whisker-5959b469c8-h758b" Jan 23 17:28:56.457235 kubelet[3639]: I0123 17:28:56.235880 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-wk9xx\" (UniqueName: \"kubernetes.io/projected/1c635441-6948-4d33-9972-c0de361e6d46-kube-api-access-wk9xx\") pod \"whisker-5959b469c8-h758b\" (UID: \"1c635441-6948-4d33-9972-c0de361e6d46\") " pod="calico-system/whisker-5959b469c8-h758b" Jan 23 17:28:56.457235 kubelet[3639]: I0123 17:28:56.235933 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9607b09f-7ec4-4ed2-9e57-38044aa1d0d6-calico-apiserver-certs\") pod \"calico-apiserver-75d7f978dc-h5tcw\" (UID: \"9607b09f-7ec4-4ed2-9e57-38044aa1d0d6\") " pod="calico-apiserver/calico-apiserver-75d7f978dc-h5tcw" Jan 23 17:28:56.457235 kubelet[3639]: I0123 17:28:56.235946 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhrx5\" (UniqueName: \"kubernetes.io/projected/d6234171-70b3-48b5-98d5-2c3cd8e41f24-kube-api-access-nhrx5\") pod \"calico-apiserver-66f6568cfc-b7js4\" (UID: \"d6234171-70b3-48b5-98d5-2c3cd8e41f24\") " pod="calico-apiserver/calico-apiserver-66f6568cfc-b7js4" Jan 23 17:28:56.151452 systemd[1]: Created slice kubepods-besteffort-poda12545f0_91c5_4708_a845_2a7a18a8616c.slice - libcontainer container kubepods-besteffort-poda12545f0_91c5_4708_a845_2a7a18a8616c.slice. 
Jan 23 17:28:56.457863 kubelet[3639]: I0123 17:28:56.235957 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a129fea3-ad15-412b-9854-c14f30f3a9fd-calico-apiserver-certs\") pod \"calico-apiserver-66f6568cfc-nzs8x\" (UID: \"a129fea3-ad15-412b-9854-c14f30f3a9fd\") " pod="calico-apiserver/calico-apiserver-66f6568cfc-nzs8x" Jan 23 17:28:56.457863 kubelet[3639]: I0123 17:28:56.235968 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6jq4\" (UniqueName: \"kubernetes.io/projected/fc7b3f96-a778-4d13-a8d9-a43b196fdac0-kube-api-access-t6jq4\") pod \"coredns-668d6bf9bc-2tmvt\" (UID: \"fc7b3f96-a778-4d13-a8d9-a43b196fdac0\") " pod="kube-system/coredns-668d6bf9bc-2tmvt" Jan 23 17:28:56.457863 kubelet[3639]: I0123 17:28:56.235977 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8jrb\" (UniqueName: \"kubernetes.io/projected/a12545f0-91c5-4708-a845-2a7a18a8616c-kube-api-access-k8jrb\") pod \"goldmane-666569f655-wq8fz\" (UID: \"a12545f0-91c5-4708-a845-2a7a18a8616c\") " pod="calico-system/goldmane-666569f655-wq8fz" Jan 23 17:28:56.457863 kubelet[3639]: I0123 17:28:56.235988 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1c635441-6948-4d33-9972-c0de361e6d46-whisker-backend-key-pair\") pod \"whisker-5959b469c8-h758b\" (UID: \"1c635441-6948-4d33-9972-c0de361e6d46\") " pod="calico-system/whisker-5959b469c8-h758b" Jan 23 17:28:56.457863 kubelet[3639]: I0123 17:28:56.236000 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a12545f0-91c5-4708-a845-2a7a18a8616c-goldmane-key-pair\") pod \"goldmane-666569f655-wq8fz\" (UID: 
\"a12545f0-91c5-4708-a845-2a7a18a8616c\") " pod="calico-system/goldmane-666569f655-wq8fz" Jan 23 17:28:56.156529 systemd[1]: Created slice kubepods-besteffort-poda129fea3_ad15_412b_9854_c14f30f3a9fd.slice - libcontainer container kubepods-besteffort-poda129fea3_ad15_412b_9854_c14f30f3a9fd.slice. Jan 23 17:28:56.457972 kubelet[3639]: I0123 17:28:56.236017 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t5xz\" (UniqueName: \"kubernetes.io/projected/a129fea3-ad15-412b-9854-c14f30f3a9fd-kube-api-access-8t5xz\") pod \"calico-apiserver-66f6568cfc-nzs8x\" (UID: \"a129fea3-ad15-412b-9854-c14f30f3a9fd\") " pod="calico-apiserver/calico-apiserver-66f6568cfc-nzs8x" Jan 23 17:28:56.457972 kubelet[3639]: I0123 17:28:56.236040 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0601b217-86e0-4d5d-8bdc-4a1067d58ca6-config-volume\") pod \"coredns-668d6bf9bc-cc7gn\" (UID: \"0601b217-86e0-4d5d-8bdc-4a1067d58ca6\") " pod="kube-system/coredns-668d6bf9bc-cc7gn" Jan 23 17:28:56.457972 kubelet[3639]: I0123 17:28:56.236060 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfklk\" (UniqueName: \"kubernetes.io/projected/0601b217-86e0-4d5d-8bdc-4a1067d58ca6-kube-api-access-vfklk\") pod \"coredns-668d6bf9bc-cc7gn\" (UID: \"0601b217-86e0-4d5d-8bdc-4a1067d58ca6\") " pod="kube-system/coredns-668d6bf9bc-cc7gn" Jan 23 17:28:56.457972 kubelet[3639]: I0123 17:28:56.236081 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd6w2\" (UniqueName: \"kubernetes.io/projected/9607b09f-7ec4-4ed2-9e57-38044aa1d0d6-kube-api-access-kd6w2\") pod \"calico-apiserver-75d7f978dc-h5tcw\" (UID: \"9607b09f-7ec4-4ed2-9e57-38044aa1d0d6\") " pod="calico-apiserver/calico-apiserver-75d7f978dc-h5tcw" Jan 23 
17:28:56.457972 kubelet[3639]: I0123 17:28:56.236097 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c60749af-cedd-49c6-899a-24ca91720bf5-tigera-ca-bundle\") pod \"calico-kube-controllers-7fcbcd85c4-2prtk\" (UID: \"c60749af-cedd-49c6-899a-24ca91720bf5\") " pod="calico-system/calico-kube-controllers-7fcbcd85c4-2prtk" Jan 23 17:28:56.163079 systemd[1]: Created slice kubepods-besteffort-pod9607b09f_7ec4_4ed2_9e57_38044aa1d0d6.slice - libcontainer container kubepods-besteffort-pod9607b09f_7ec4_4ed2_9e57_38044aa1d0d6.slice. Jan 23 17:28:56.754755 containerd[2108]: time="2026-01-23T17:28:56.754644769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cc7gn,Uid:0601b217-86e0-4d5d-8bdc-4a1067d58ca6,Namespace:kube-system,Attempt:0,}" Jan 23 17:28:56.760578 containerd[2108]: time="2026-01-23T17:28:56.760535028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66f6568cfc-b7js4,Uid:d6234171-70b3-48b5-98d5-2c3cd8e41f24,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:28:56.760719 containerd[2108]: time="2026-01-23T17:28:56.760623041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcbcd85c4-2prtk,Uid:c60749af-cedd-49c6-899a-24ca91720bf5,Namespace:calico-system,Attempt:0,}" Jan 23 17:28:56.762379 containerd[2108]: time="2026-01-23T17:28:56.762315090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d7f978dc-h5tcw,Uid:9607b09f-7ec4-4ed2-9e57-38044aa1d0d6,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:28:56.762379 containerd[2108]: time="2026-01-23T17:28:56.762379102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66f6568cfc-nzs8x,Uid:a129fea3-ad15-412b-9854-c14f30f3a9fd,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:28:56.762478 containerd[2108]: time="2026-01-23T17:28:56.762458234Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2tmvt,Uid:fc7b3f96-a778-4d13-a8d9-a43b196fdac0,Namespace:kube-system,Attempt:0,}" Jan 23 17:28:57.013879 containerd[2108]: time="2026-01-23T17:28:57.013678196Z" level=error msg="Failed to destroy network for sandbox \"64d5b623602666c4a0413db66563338ac99f8d238c1326545fd82a1b15b41864\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.080393 containerd[2108]: time="2026-01-23T17:28:57.080218886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cc7gn,Uid:0601b217-86e0-4d5d-8bdc-4a1067d58ca6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64d5b623602666c4a0413db66563338ac99f8d238c1326545fd82a1b15b41864\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.082554 kubelet[3639]: E0123 17:28:57.080493 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64d5b623602666c4a0413db66563338ac99f8d238c1326545fd82a1b15b41864\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.082554 kubelet[3639]: E0123 17:28:57.080580 3639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64d5b623602666c4a0413db66563338ac99f8d238c1326545fd82a1b15b41864\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cc7gn" Jan 23 17:28:57.082554 kubelet[3639]: E0123 17:28:57.080598 3639 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64d5b623602666c4a0413db66563338ac99f8d238c1326545fd82a1b15b41864\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cc7gn" Jan 23 17:28:57.082640 kubelet[3639]: E0123 17:28:57.080638 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cc7gn_kube-system(0601b217-86e0-4d5d-8bdc-4a1067d58ca6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cc7gn_kube-system(0601b217-86e0-4d5d-8bdc-4a1067d58ca6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64d5b623602666c4a0413db66563338ac99f8d238c1326545fd82a1b15b41864\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cc7gn" podUID="0601b217-86e0-4d5d-8bdc-4a1067d58ca6" Jan 23 17:28:57.158770 containerd[2108]: time="2026-01-23T17:28:57.158036127Z" level=error msg="Failed to destroy network for sandbox \"57d8924325bf6e61822d95f0f93710bd298b7a513fbf391a15230c564efde4d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.160007 systemd[1]: run-netns-cni\x2d60bbd780\x2d5f6a\x2df15d\x2db7f6\x2d2ffa887f43ce.mount: Deactivated successfully. 
Jan 23 17:28:57.166243 containerd[2108]: time="2026-01-23T17:28:57.166183197Z" level=error msg="Failed to destroy network for sandbox \"aaf6d50c304f4218a2c4a1380217ad40f41e4f1f5d30588d52cc5a629053e221\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.168384 systemd[1]: run-netns-cni\x2d1f1f7e53\x2db6f4\x2de6d5\x2d8af8\x2dce886c932e95.mount: Deactivated successfully. Jan 23 17:28:57.176052 containerd[2108]: time="2026-01-23T17:28:57.176010658Z" level=error msg="Failed to destroy network for sandbox \"9ee838b0622453e9c7b968139fe4b80e9d5326870c1d1e9090f1b2afce4a9100\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.178475 systemd[1]: run-netns-cni\x2d49040b60\x2dafda\x2d656a\x2dab5a\x2d29cca27cf34a.mount: Deactivated successfully. Jan 23 17:28:57.181029 containerd[2108]: time="2026-01-23T17:28:57.180980108Z" level=error msg="Failed to destroy network for sandbox \"d34e4c503f123edd32f6adb9a017ac509db6fccacee6aa9accfe543918a8fa40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.182610 systemd[1]: run-netns-cni\x2d6138e887\x2d9d6a\x2dc79b\x2d874c\x2da3b45b2fd253.mount: Deactivated successfully. 
Jan 23 17:28:57.184969 containerd[2108]: time="2026-01-23T17:28:57.184701961Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcbcd85c4-2prtk,Uid:c60749af-cedd-49c6-899a-24ca91720bf5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaf6d50c304f4218a2c4a1380217ad40f41e4f1f5d30588d52cc5a629053e221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.185079 kubelet[3639]: E0123 17:28:57.184916 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaf6d50c304f4218a2c4a1380217ad40f41e4f1f5d30588d52cc5a629053e221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.185079 kubelet[3639]: E0123 17:28:57.184967 3639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaf6d50c304f4218a2c4a1380217ad40f41e4f1f5d30588d52cc5a629053e221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fcbcd85c4-2prtk" Jan 23 17:28:57.185079 kubelet[3639]: E0123 17:28:57.184982 3639 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaf6d50c304f4218a2c4a1380217ad40f41e4f1f5d30588d52cc5a629053e221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7fcbcd85c4-2prtk" Jan 23 17:28:57.185162 kubelet[3639]: E0123 17:28:57.185023 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7fcbcd85c4-2prtk_calico-system(c60749af-cedd-49c6-899a-24ca91720bf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7fcbcd85c4-2prtk_calico-system(c60749af-cedd-49c6-899a-24ca91720bf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aaf6d50c304f4218a2c4a1380217ad40f41e4f1f5d30588d52cc5a629053e221\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fcbcd85c4-2prtk" podUID="c60749af-cedd-49c6-899a-24ca91720bf5" Jan 23 17:28:57.190957 containerd[2108]: time="2026-01-23T17:28:57.190913312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66f6568cfc-b7js4,Uid:d6234171-70b3-48b5-98d5-2c3cd8e41f24,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"57d8924325bf6e61822d95f0f93710bd298b7a513fbf391a15230c564efde4d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.191418 kubelet[3639]: E0123 17:28:57.191197 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57d8924325bf6e61822d95f0f93710bd298b7a513fbf391a15230c564efde4d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.191418 kubelet[3639]: E0123 
17:28:57.191243 3639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57d8924325bf6e61822d95f0f93710bd298b7a513fbf391a15230c564efde4d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66f6568cfc-b7js4" Jan 23 17:28:57.191603 kubelet[3639]: E0123 17:28:57.191505 3639 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57d8924325bf6e61822d95f0f93710bd298b7a513fbf391a15230c564efde4d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66f6568cfc-b7js4" Jan 23 17:28:57.191603 kubelet[3639]: E0123 17:28:57.191565 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66f6568cfc-b7js4_calico-apiserver(d6234171-70b3-48b5-98d5-2c3cd8e41f24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66f6568cfc-b7js4_calico-apiserver(d6234171-70b3-48b5-98d5-2c3cd8e41f24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57d8924325bf6e61822d95f0f93710bd298b7a513fbf391a15230c564efde4d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-b7js4" podUID="d6234171-70b3-48b5-98d5-2c3cd8e41f24" Jan 23 17:28:57.192243 containerd[2108]: time="2026-01-23T17:28:57.192202367Z" level=error msg="Failed to destroy network for sandbox 
\"4780184d7393891a8feb13e16005e0ceac91531b28e5300422d172b252fee90c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.222097 containerd[2108]: time="2026-01-23T17:28:57.222004339Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2tmvt,Uid:fc7b3f96-a778-4d13-a8d9-a43b196fdac0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ee838b0622453e9c7b968139fe4b80e9d5326870c1d1e9090f1b2afce4a9100\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.222336 kubelet[3639]: E0123 17:28:57.222289 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ee838b0622453e9c7b968139fe4b80e9d5326870c1d1e9090f1b2afce4a9100\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.222394 kubelet[3639]: E0123 17:28:57.222362 3639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ee838b0622453e9c7b968139fe4b80e9d5326870c1d1e9090f1b2afce4a9100\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2tmvt" Jan 23 17:28:57.222394 kubelet[3639]: E0123 17:28:57.222377 3639 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9ee838b0622453e9c7b968139fe4b80e9d5326870c1d1e9090f1b2afce4a9100\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2tmvt" Jan 23 17:28:57.222472 kubelet[3639]: E0123 17:28:57.222441 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2tmvt_kube-system(fc7b3f96-a778-4d13-a8d9-a43b196fdac0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2tmvt_kube-system(fc7b3f96-a778-4d13-a8d9-a43b196fdac0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ee838b0622453e9c7b968139fe4b80e9d5326870c1d1e9090f1b2afce4a9100\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2tmvt" podUID="fc7b3f96-a778-4d13-a8d9-a43b196fdac0" Jan 23 17:28:57.241660 containerd[2108]: time="2026-01-23T17:28:57.241599386Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d7f978dc-h5tcw,Uid:9607b09f-7ec4-4ed2-9e57-38044aa1d0d6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d34e4c503f123edd32f6adb9a017ac509db6fccacee6aa9accfe543918a8fa40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.241908 kubelet[3639]: E0123 17:28:57.241846 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d34e4c503f123edd32f6adb9a017ac509db6fccacee6aa9accfe543918a8fa40\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.241908 kubelet[3639]: E0123 17:28:57.241896 3639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d34e4c503f123edd32f6adb9a017ac509db6fccacee6aa9accfe543918a8fa40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75d7f978dc-h5tcw" Jan 23 17:28:57.241984 kubelet[3639]: E0123 17:28:57.241911 3639 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d34e4c503f123edd32f6adb9a017ac509db6fccacee6aa9accfe543918a8fa40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75d7f978dc-h5tcw" Jan 23 17:28:57.241984 kubelet[3639]: E0123 17:28:57.241954 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75d7f978dc-h5tcw_calico-apiserver(9607b09f-7ec4-4ed2-9e57-38044aa1d0d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75d7f978dc-h5tcw_calico-apiserver(9607b09f-7ec4-4ed2-9e57-38044aa1d0d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d34e4c503f123edd32f6adb9a017ac509db6fccacee6aa9accfe543918a8fa40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75d7f978dc-h5tcw" podUID="9607b09f-7ec4-4ed2-9e57-38044aa1d0d6" Jan 23 
17:28:57.247802 containerd[2108]: time="2026-01-23T17:28:57.247676896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66f6568cfc-nzs8x,Uid:a129fea3-ad15-412b-9854-c14f30f3a9fd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4780184d7393891a8feb13e16005e0ceac91531b28e5300422d172b252fee90c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.248324 kubelet[3639]: E0123 17:28:57.248012 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4780184d7393891a8feb13e16005e0ceac91531b28e5300422d172b252fee90c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.248324 kubelet[3639]: E0123 17:28:57.248083 3639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4780184d7393891a8feb13e16005e0ceac91531b28e5300422d172b252fee90c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66f6568cfc-nzs8x" Jan 23 17:28:57.248324 kubelet[3639]: E0123 17:28:57.248110 3639 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4780184d7393891a8feb13e16005e0ceac91531b28e5300422d172b252fee90c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-66f6568cfc-nzs8x" Jan 23 17:28:57.248433 kubelet[3639]: E0123 17:28:57.248150 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66f6568cfc-nzs8x_calico-apiserver(a129fea3-ad15-412b-9854-c14f30f3a9fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66f6568cfc-nzs8x_calico-apiserver(a129fea3-ad15-412b-9854-c14f30f3a9fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4780184d7393891a8feb13e16005e0ceac91531b28e5300422d172b252fee90c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-nzs8x" podUID="a129fea3-ad15-412b-9854-c14f30f3a9fd" Jan 23 17:28:57.337991 kubelet[3639]: E0123 17:28:57.337873 3639 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Jan 23 17:28:57.339006 kubelet[3639]: E0123 17:28:57.338785 3639 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c635441-6948-4d33-9972-c0de361e6d46-whisker-backend-key-pair podName:1c635441-6948-4d33-9972-c0de361e6d46 nodeName:}" failed. No retries permitted until 2026-01-23 17:28:57.83876177 +0000 UTC m=+31.555641309 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/1c635441-6948-4d33-9972-c0de361e6d46-whisker-backend-key-pair") pod "whisker-5959b469c8-h758b" (UID: "1c635441-6948-4d33-9972-c0de361e6d46") : failed to sync secret cache: timed out waiting for the condition Jan 23 17:28:57.342039 kubelet[3639]: E0123 17:28:57.342012 3639 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 23 17:28:57.342117 kubelet[3639]: E0123 17:28:57.342082 3639 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c635441-6948-4d33-9972-c0de361e6d46-whisker-ca-bundle podName:1c635441-6948-4d33-9972-c0de361e6d46 nodeName:}" failed. No retries permitted until 2026-01-23 17:28:57.842065758 +0000 UTC m=+31.558945296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/1c635441-6948-4d33-9972-c0de361e6d46-whisker-ca-bundle") pod "whisker-5959b469c8-h758b" (UID: "1c635441-6948-4d33-9972-c0de361e6d46") : failed to sync configmap cache: timed out waiting for the condition Jan 23 17:28:57.362229 systemd[1]: Created slice kubepods-besteffort-pod25c5832d_778b_4f5d_974d_1be8e7376fdb.slice - libcontainer container kubepods-besteffort-pod25c5832d_778b_4f5d_974d_1be8e7376fdb.slice. 
Jan 23 17:28:57.363103 containerd[2108]: time="2026-01-23T17:28:57.362998575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wq8fz,Uid:a12545f0-91c5-4708-a845-2a7a18a8616c,Namespace:calico-system,Attempt:0,}" Jan 23 17:28:57.365228 containerd[2108]: time="2026-01-23T17:28:57.365194767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtvct,Uid:25c5832d-778b-4f5d-974d-1be8e7376fdb,Namespace:calico-system,Attempt:0,}" Jan 23 17:28:57.437795 containerd[2108]: time="2026-01-23T17:28:57.437672911Z" level=error msg="Failed to destroy network for sandbox \"a06ab9e1f4ad636e3178e745655fa73c66bf425fc6e66a2e0c71b08a8a5540b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.441295 containerd[2108]: time="2026-01-23T17:28:57.441206016Z" level=error msg="Failed to destroy network for sandbox \"35693c8ccfe002732c59fe9afa4a5a199a08434f43e9e608292a07f956ff37d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.451387 containerd[2108]: time="2026-01-23T17:28:57.451330624Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtvct,Uid:25c5832d-778b-4f5d-974d-1be8e7376fdb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"35693c8ccfe002732c59fe9afa4a5a199a08434f43e9e608292a07f956ff37d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.451808 kubelet[3639]: E0123 17:28:57.451703 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"35693c8ccfe002732c59fe9afa4a5a199a08434f43e9e608292a07f956ff37d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.451808 kubelet[3639]: E0123 17:28:57.451785 3639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35693c8ccfe002732c59fe9afa4a5a199a08434f43e9e608292a07f956ff37d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtvct" Jan 23 17:28:57.452253 kubelet[3639]: E0123 17:28:57.451899 3639 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35693c8ccfe002732c59fe9afa4a5a199a08434f43e9e608292a07f956ff37d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtvct" Jan 23 17:28:57.452253 kubelet[3639]: E0123 17:28:57.451952 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dtvct_calico-system(25c5832d-778b-4f5d-974d-1be8e7376fdb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dtvct_calico-system(25c5832d-778b-4f5d-974d-1be8e7376fdb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35693c8ccfe002732c59fe9afa4a5a199a08434f43e9e608292a07f956ff37d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-dtvct" podUID="25c5832d-778b-4f5d-974d-1be8e7376fdb" Jan 23 17:28:57.454696 containerd[2108]: time="2026-01-23T17:28:57.454654389Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wq8fz,Uid:a12545f0-91c5-4708-a845-2a7a18a8616c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a06ab9e1f4ad636e3178e745655fa73c66bf425fc6e66a2e0c71b08a8a5540b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.455034 kubelet[3639]: E0123 17:28:57.454990 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a06ab9e1f4ad636e3178e745655fa73c66bf425fc6e66a2e0c71b08a8a5540b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:57.455086 kubelet[3639]: E0123 17:28:57.455055 3639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a06ab9e1f4ad636e3178e745655fa73c66bf425fc6e66a2e0c71b08a8a5540b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wq8fz" Jan 23 17:28:57.455086 kubelet[3639]: E0123 17:28:57.455073 3639 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a06ab9e1f4ad636e3178e745655fa73c66bf425fc6e66a2e0c71b08a8a5540b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wq8fz" Jan 23 17:28:57.455140 kubelet[3639]: E0123 17:28:57.455108 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-wq8fz_calico-system(a12545f0-91c5-4708-a845-2a7a18a8616c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-wq8fz_calico-system(a12545f0-91c5-4708-a845-2a7a18a8616c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a06ab9e1f4ad636e3178e745655fa73c66bf425fc6e66a2e0c71b08a8a5540b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-wq8fz" podUID="a12545f0-91c5-4708-a845-2a7a18a8616c" Jan 23 17:28:57.489929 containerd[2108]: time="2026-01-23T17:28:57.488968614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 17:28:57.987504 containerd[2108]: time="2026-01-23T17:28:57.987464534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5959b469c8-h758b,Uid:1c635441-6948-4d33-9972-c0de361e6d46,Namespace:calico-system,Attempt:0,}" Jan 23 17:28:58.035140 containerd[2108]: time="2026-01-23T17:28:58.035077963Z" level=error msg="Failed to destroy network for sandbox \"ae5516f6f6b7dbe84eb89b126bbc488397d117658659207db133794d6ab98f19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:58.045013 containerd[2108]: time="2026-01-23T17:28:58.044934002Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5959b469c8-h758b,Uid:1c635441-6948-4d33-9972-c0de361e6d46,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"ae5516f6f6b7dbe84eb89b126bbc488397d117658659207db133794d6ab98f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:58.045379 kubelet[3639]: E0123 17:28:58.045331 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae5516f6f6b7dbe84eb89b126bbc488397d117658659207db133794d6ab98f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:28:58.045679 kubelet[3639]: E0123 17:28:58.045406 3639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae5516f6f6b7dbe84eb89b126bbc488397d117658659207db133794d6ab98f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5959b469c8-h758b" Jan 23 17:28:58.045679 kubelet[3639]: E0123 17:28:58.045426 3639 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae5516f6f6b7dbe84eb89b126bbc488397d117658659207db133794d6ab98f19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5959b469c8-h758b" Jan 23 17:28:58.045679 kubelet[3639]: E0123 17:28:58.045484 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5959b469c8-h758b_calico-system(1c635441-6948-4d33-9972-c0de361e6d46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-5959b469c8-h758b_calico-system(1c635441-6948-4d33-9972-c0de361e6d46)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae5516f6f6b7dbe84eb89b126bbc488397d117658659207db133794d6ab98f19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5959b469c8-h758b" podUID="1c635441-6948-4d33-9972-c0de361e6d46" Jan 23 17:28:58.063360 systemd[1]: run-netns-cni\x2da1e31511\x2d0dd9\x2dde10\x2d8302\x2d0542cb13f558.mount: Deactivated successfully. Jan 23 17:29:01.222787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount919772117.mount: Deactivated successfully. Jan 23 17:29:05.097543 containerd[2108]: time="2026-01-23T17:29:05.097432994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:29:05.253885 containerd[2108]: time="2026-01-23T17:29:05.253812786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 23 17:29:05.696467 containerd[2108]: time="2026-01-23T17:29:05.696414712Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:29:05.905879 containerd[2108]: time="2026-01-23T17:29:05.905484189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:29:05.906206 containerd[2108]: time="2026-01-23T17:29:05.906173025Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", 
repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 8.416293274s" Jan 23 17:29:05.906330 containerd[2108]: time="2026-01-23T17:29:05.906310680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 23 17:29:05.958938 containerd[2108]: time="2026-01-23T17:29:05.958817060Z" level=info msg="CreateContainer within sandbox \"c48ec7428c73386ced5eaf179e7caf3ae28dc49431de9980a30fb88944b6fa1e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 17:29:08.359084 containerd[2108]: time="2026-01-23T17:29:08.359041233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66f6568cfc-b7js4,Uid:d6234171-70b3-48b5-98d5-2c3cd8e41f24,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:29:09.357808 containerd[2108]: time="2026-01-23T17:29:09.357761457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5959b469c8-h758b,Uid:1c635441-6948-4d33-9972-c0de361e6d46,Namespace:calico-system,Attempt:0,}" Jan 23 17:29:09.357808 containerd[2108]: time="2026-01-23T17:29:09.357760905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d7f978dc-h5tcw,Uid:9607b09f-7ec4-4ed2-9e57-38044aa1d0d6,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:29:09.944889 containerd[2108]: time="2026-01-23T17:29:09.942976488Z" level=info msg="Container 59b75868fa621d0e0dc9c559cba234135a29a95bf0917db07ddafded90f243b1: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:29:09.947436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1510817569.mount: Deactivated successfully. 
Jan 23 17:29:10.030306 containerd[2108]: time="2026-01-23T17:29:10.030234236Z" level=error msg="Failed to destroy network for sandbox \"7d38088c13a5c0cf9f357b859488e14584eee29a132542d56e53c92ef1d9e5f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:29:10.031954 systemd[1]: run-netns-cni\x2d254555bf\x2d0f48\x2d739f\x2d95c9\x2dd9c5041adb06.mount: Deactivated successfully. Jan 23 17:29:10.288049 containerd[2108]: time="2026-01-23T17:29:10.287992619Z" level=error msg="Failed to destroy network for sandbox \"ce7b2ebc8dd1e0aaa65c0b605ab42d2fa7113e4c047c387f28ae9f2df15204b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:29:10.343439 containerd[2108]: time="2026-01-23T17:29:10.343382806Z" level=error msg="Failed to destroy network for sandbox \"fcaaebdf91ca159e54e9e90f99a73c0974580efd2f7e909376863c6e06569177\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:29:10.399098 containerd[2108]: time="2026-01-23T17:29:10.398900786Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d7f978dc-h5tcw,Uid:9607b09f-7ec4-4ed2-9e57-38044aa1d0d6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d38088c13a5c0cf9f357b859488e14584eee29a132542d56e53c92ef1d9e5f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:29:10.446521 kubelet[3639]: E0123 17:29:10.399176 3639 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d38088c13a5c0cf9f357b859488e14584eee29a132542d56e53c92ef1d9e5f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:29:10.446521 kubelet[3639]: E0123 17:29:10.399243 3639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d38088c13a5c0cf9f357b859488e14584eee29a132542d56e53c92ef1d9e5f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75d7f978dc-h5tcw" Jan 23 17:29:10.446521 kubelet[3639]: E0123 17:29:10.399263 3639 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d38088c13a5c0cf9f357b859488e14584eee29a132542d56e53c92ef1d9e5f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75d7f978dc-h5tcw" Jan 23 17:29:10.446876 kubelet[3639]: E0123 17:29:10.399325 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75d7f978dc-h5tcw_calico-apiserver(9607b09f-7ec4-4ed2-9e57-38044aa1d0d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75d7f978dc-h5tcw_calico-apiserver(9607b09f-7ec4-4ed2-9e57-38044aa1d0d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d38088c13a5c0cf9f357b859488e14584eee29a132542d56e53c92ef1d9e5f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75d7f978dc-h5tcw" podUID="9607b09f-7ec4-4ed2-9e57-38044aa1d0d6" Jan 23 17:29:10.451301 containerd[2108]: time="2026-01-23T17:29:10.451194026Z" level=info msg="CreateContainer within sandbox \"c48ec7428c73386ced5eaf179e7caf3ae28dc49431de9980a30fb88944b6fa1e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"59b75868fa621d0e0dc9c559cba234135a29a95bf0917db07ddafded90f243b1\"" Jan 23 17:29:10.452284 containerd[2108]: time="2026-01-23T17:29:10.452188521Z" level=info msg="StartContainer for \"59b75868fa621d0e0dc9c559cba234135a29a95bf0917db07ddafded90f243b1\"" Jan 23 17:29:10.454125 containerd[2108]: time="2026-01-23T17:29:10.454102441Z" level=info msg="connecting to shim 59b75868fa621d0e0dc9c559cba234135a29a95bf0917db07ddafded90f243b1" address="unix:///run/containerd/s/1353152d335387966a04662c8f9633b8434e9635a5562e4122b8375703006cc4" protocol=ttrpc version=3 Jan 23 17:29:10.478488 systemd[1]: Started cri-containerd-59b75868fa621d0e0dc9c559cba234135a29a95bf0917db07ddafded90f243b1.scope - libcontainer container 59b75868fa621d0e0dc9c559cba234135a29a95bf0917db07ddafded90f243b1. 
Jan 23 17:29:10.499542 containerd[2108]: time="2026-01-23T17:29:10.499460709Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66f6568cfc-b7js4,Uid:d6234171-70b3-48b5-98d5-2c3cd8e41f24,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7b2ebc8dd1e0aaa65c0b605ab42d2fa7113e4c047c387f28ae9f2df15204b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:29:10.499822 kubelet[3639]: E0123 17:29:10.499718 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7b2ebc8dd1e0aaa65c0b605ab42d2fa7113e4c047c387f28ae9f2df15204b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:29:10.499822 kubelet[3639]: E0123 17:29:10.499775 3639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7b2ebc8dd1e0aaa65c0b605ab42d2fa7113e4c047c387f28ae9f2df15204b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66f6568cfc-b7js4" Jan 23 17:29:10.499822 kubelet[3639]: E0123 17:29:10.499797 3639 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7b2ebc8dd1e0aaa65c0b605ab42d2fa7113e4c047c387f28ae9f2df15204b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-66f6568cfc-b7js4" Jan 23 17:29:10.499998 kubelet[3639]: E0123 17:29:10.499838 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66f6568cfc-b7js4_calico-apiserver(d6234171-70b3-48b5-98d5-2c3cd8e41f24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66f6568cfc-b7js4_calico-apiserver(d6234171-70b3-48b5-98d5-2c3cd8e41f24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce7b2ebc8dd1e0aaa65c0b605ab42d2fa7113e4c047c387f28ae9f2df15204b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-b7js4" podUID="d6234171-70b3-48b5-98d5-2c3cd8e41f24" Jan 23 17:29:10.517000 audit: BPF prog-id=196 op=LOAD Jan 23 17:29:10.517000 audit[4747]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4142 pid=4747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:10.540302 kernel: audit: type=1334 audit(1769189350.517:601): prog-id=196 op=LOAD Jan 23 17:29:10.540432 kernel: audit: type=1300 audit(1769189350.517:601): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4142 pid=4747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:10.517000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539623735383638666136323164306530646339633535396362613233 Jan 23 17:29:10.557472 kernel: audit: type=1327 audit(1769189350.517:601): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539623735383638666136323164306530646339633535396362613233 Jan 23 17:29:10.558628 containerd[2108]: time="2026-01-23T17:29:10.558562312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5959b469c8-h758b,Uid:1c635441-6948-4d33-9972-c0de361e6d46,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcaaebdf91ca159e54e9e90f99a73c0974580efd2f7e909376863c6e06569177\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:29:10.517000 audit: BPF prog-id=197 op=LOAD Jan 23 17:29:10.559870 kubelet[3639]: E0123 17:29:10.558805 3639 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcaaebdf91ca159e54e9e90f99a73c0974580efd2f7e909376863c6e06569177\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:29:10.559870 kubelet[3639]: E0123 17:29:10.558861 3639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcaaebdf91ca159e54e9e90f99a73c0974580efd2f7e909376863c6e06569177\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5959b469c8-h758b" Jan 23 17:29:10.559870 kubelet[3639]: E0123 17:29:10.558879 3639 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcaaebdf91ca159e54e9e90f99a73c0974580efd2f7e909376863c6e06569177\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5959b469c8-h758b" Jan 23 17:29:10.559959 kubelet[3639]: E0123 17:29:10.558913 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5959b469c8-h758b_calico-system(1c635441-6948-4d33-9972-c0de361e6d46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5959b469c8-h758b_calico-system(1c635441-6948-4d33-9972-c0de361e6d46)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcaaebdf91ca159e54e9e90f99a73c0974580efd2f7e909376863c6e06569177\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5959b469c8-h758b" podUID="1c635441-6948-4d33-9972-c0de361e6d46" Jan 23 17:29:10.563247 kernel: audit: type=1334 audit(1769189350.517:602): prog-id=197 op=LOAD Jan 23 17:29:10.517000 audit[4747]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4142 pid=4747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:10.579567 kernel: audit: type=1300 audit(1769189350.517:602): arch=c00000b7 syscall=280 
success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4142 pid=4747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:10.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539623735383638666136323164306530646339633535396362613233 Jan 23 17:29:10.596932 kernel: audit: type=1327 audit(1769189350.517:602): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539623735383638666136323164306530646339633535396362613233 Jan 23 17:29:10.521000 audit: BPF prog-id=197 op=UNLOAD Jan 23 17:29:10.603624 kernel: audit: type=1334 audit(1769189350.521:603): prog-id=197 op=UNLOAD Jan 23 17:29:10.603720 kernel: audit: type=1300 audit(1769189350.521:603): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4142 pid=4747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:10.521000 audit[4747]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4142 pid=4747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:10.521000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539623735383638666136323164306530646339633535396362613233 Jan 23 17:29:10.638632 kernel: audit: type=1327 audit(1769189350.521:603): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539623735383638666136323164306530646339633535396362613233 Jan 23 17:29:10.521000 audit: BPF prog-id=196 op=UNLOAD Jan 23 17:29:10.643921 kernel: audit: type=1334 audit(1769189350.521:604): prog-id=196 op=UNLOAD Jan 23 17:29:10.521000 audit[4747]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4142 pid=4747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:10.521000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539623735383638666136323164306530646339633535396362613233 Jan 23 17:29:10.521000 audit: BPF prog-id=198 op=LOAD Jan 23 17:29:10.521000 audit[4747]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4142 pid=4747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:10.521000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539623735383638666136323164306530646339633535396362613233 Jan 23 17:29:10.651365 containerd[2108]: time="2026-01-23T17:29:10.651297102Z" level=info msg="StartContainer for \"59b75868fa621d0e0dc9c559cba234135a29a95bf0917db07ddafded90f243b1\" returns successfully" Jan 23 17:29:10.802521 systemd[1]: run-netns-cni\x2d38fe3da3\x2da502\x2d3964\x2d3f0d\x2d5199aa1ba140.mount: Deactivated successfully. Jan 23 17:29:10.882816 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 17:29:10.882927 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 23 17:29:11.358654 containerd[2108]: time="2026-01-23T17:29:11.358413685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcbcd85c4-2prtk,Uid:c60749af-cedd-49c6-899a-24ca91720bf5,Namespace:calico-system,Attempt:0,}" Jan 23 17:29:11.358654 containerd[2108]: time="2026-01-23T17:29:11.358573895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66f6568cfc-nzs8x,Uid:a129fea3-ad15-412b-9854-c14f30f3a9fd,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:29:11.359289 containerd[2108]: time="2026-01-23T17:29:11.359054476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cc7gn,Uid:0601b217-86e0-4d5d-8bdc-4a1067d58ca6,Namespace:kube-system,Attempt:0,}" Jan 23 17:29:11.359289 containerd[2108]: time="2026-01-23T17:29:11.359156322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wq8fz,Uid:a12545f0-91c5-4708-a845-2a7a18a8616c,Namespace:calico-system,Attempt:0,}" Jan 23 17:29:11.359513 containerd[2108]: time="2026-01-23T17:29:11.359470461Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-2tmvt,Uid:fc7b3f96-a778-4d13-a8d9-a43b196fdac0,Namespace:kube-system,Attempt:0,}" Jan 23 17:29:11.579746 kubelet[3639]: I0123 17:29:11.579679 3639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9jn88" podStartSLOduration=6.277826274 podStartE2EDuration="24.579664812s" podCreationTimestamp="2026-01-23 17:28:47 +0000 UTC" firstStartedPulling="2026-01-23 17:28:47.605170635 +0000 UTC m=+21.322050181" lastFinishedPulling="2026-01-23 17:29:05.907009181 +0000 UTC m=+39.623888719" observedRunningTime="2026-01-23 17:29:11.579144077 +0000 UTC m=+45.296023743" watchObservedRunningTime="2026-01-23 17:29:11.579664812 +0000 UTC m=+45.296544358" Jan 23 17:29:11.729136 kubelet[3639]: I0123 17:29:11.729089 3639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1c635441-6948-4d33-9972-c0de361e6d46-whisker-backend-key-pair\") pod \"1c635441-6948-4d33-9972-c0de361e6d46\" (UID: \"1c635441-6948-4d33-9972-c0de361e6d46\") " Jan 23 17:29:11.729136 kubelet[3639]: I0123 17:29:11.729148 3639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk9xx\" (UniqueName: \"kubernetes.io/projected/1c635441-6948-4d33-9972-c0de361e6d46-kube-api-access-wk9xx\") pod \"1c635441-6948-4d33-9972-c0de361e6d46\" (UID: \"1c635441-6948-4d33-9972-c0de361e6d46\") " Jan 23 17:29:11.729136 kubelet[3639]: I0123 17:29:11.729179 3639 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c635441-6948-4d33-9972-c0de361e6d46-whisker-ca-bundle\") pod \"1c635441-6948-4d33-9972-c0de361e6d46\" (UID: \"1c635441-6948-4d33-9972-c0de361e6d46\") " Jan 23 17:29:11.731393 kubelet[3639]: I0123 17:29:11.731288 3639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/1c635441-6948-4d33-9972-c0de361e6d46-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1c635441-6948-4d33-9972-c0de361e6d46" (UID: "1c635441-6948-4d33-9972-c0de361e6d46"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 17:29:11.733707 kubelet[3639]: I0123 17:29:11.733389 3639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c635441-6948-4d33-9972-c0de361e6d46-kube-api-access-wk9xx" (OuterVolumeSpecName: "kube-api-access-wk9xx") pod "1c635441-6948-4d33-9972-c0de361e6d46" (UID: "1c635441-6948-4d33-9972-c0de361e6d46"). InnerVolumeSpecName "kube-api-access-wk9xx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 17:29:11.733915 kubelet[3639]: I0123 17:29:11.733889 3639 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c635441-6948-4d33-9972-c0de361e6d46-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1c635441-6948-4d33-9972-c0de361e6d46" (UID: "1c635441-6948-4d33-9972-c0de361e6d46"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 17:29:11.802518 systemd[1]: var-lib-kubelet-pods-1c635441\x2d6948\x2d4d33\x2d9972\x2dc0de361e6d46-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 23 17:29:13.292462 kubelet[3639]: I0123 17:29:11.830143 3639 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1c635441-6948-4d33-9972-c0de361e6d46-whisker-backend-key-pair\") on node \"ci-4547.1.0-a-f00ee6181d\" DevicePath \"\"" Jan 23 17:29:13.292462 kubelet[3639]: I0123 17:29:11.830179 3639 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wk9xx\" (UniqueName: \"kubernetes.io/projected/1c635441-6948-4d33-9972-c0de361e6d46-kube-api-access-wk9xx\") on node \"ci-4547.1.0-a-f00ee6181d\" DevicePath \"\"" Jan 23 17:29:13.292462 kubelet[3639]: I0123 17:29:11.830188 3639 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c635441-6948-4d33-9972-c0de361e6d46-whisker-ca-bundle\") on node \"ci-4547.1.0-a-f00ee6181d\" DevicePath \"\"" Jan 23 17:29:13.292462 kubelet[3639]: I0123 17:29:12.734604 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxcwj\" (UniqueName: \"kubernetes.io/projected/6c109b4b-3504-4b89-94ff-4a8e2ba3506a-kube-api-access-vxcwj\") pod \"whisker-7665dd49cb-d8kt6\" (UID: \"6c109b4b-3504-4b89-94ff-4a8e2ba3506a\") " pod="calico-system/whisker-7665dd49cb-d8kt6" Jan 23 17:29:13.292462 kubelet[3639]: I0123 17:29:12.734645 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c109b4b-3504-4b89-94ff-4a8e2ba3506a-whisker-backend-key-pair\") pod \"whisker-7665dd49cb-d8kt6\" (UID: \"6c109b4b-3504-4b89-94ff-4a8e2ba3506a\") " pod="calico-system/whisker-7665dd49cb-d8kt6" Jan 23 17:29:13.292462 kubelet[3639]: I0123 17:29:12.734668 3639 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6c109b4b-3504-4b89-94ff-4a8e2ba3506a-whisker-ca-bundle\") pod \"whisker-7665dd49cb-d8kt6\" (UID: \"6c109b4b-3504-4b89-94ff-4a8e2ba3506a\") " pod="calico-system/whisker-7665dd49cb-d8kt6" Jan 23 17:29:13.292911 containerd[2108]: time="2026-01-23T17:29:12.357547851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtvct,Uid:25c5832d-778b-4f5d-974d-1be8e7376fdb,Namespace:calico-system,Attempt:0,}" Jan 23 17:29:13.292911 containerd[2108]: time="2026-01-23T17:29:12.930759765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7665dd49cb-d8kt6,Uid:6c109b4b-3504-4b89-94ff-4a8e2ba3506a,Namespace:calico-system,Attempt:0,}" Jan 23 17:29:11.802604 systemd[1]: var-lib-kubelet-pods-1c635441\x2d6948\x2d4d33\x2d9972\x2dc0de361e6d46-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwk9xx.mount: Deactivated successfully. Jan 23 17:29:12.362068 systemd[1]: Removed slice kubepods-besteffort-pod1c635441_6948_4d33_9972_c0de361e6d46.slice - libcontainer container kubepods-besteffort-pod1c635441_6948_4d33_9972_c0de361e6d46.slice. Jan 23 17:29:12.626159 systemd[1]: Created slice kubepods-besteffort-pod6c109b4b_3504_4b89_94ff_4a8e2ba3506a.slice - libcontainer container kubepods-besteffort-pod6c109b4b_3504_4b89_94ff_4a8e2ba3506a.slice. 
Jan 23 17:29:13.309851 systemd-networkd[1692]: cali07d70eaaf10: Link UP Jan 23 17:29:13.310372 systemd-networkd[1692]: cali07d70eaaf10: Gained carrier Jan 23 17:29:13.328768 containerd[2108]: 2026-01-23 17:29:11.718 [INFO][4804] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 17:29:13.328768 containerd[2108]: 2026-01-23 17:29:11.753 [INFO][4804] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--f00ee6181d-k8s-goldmane--666569f655--wq8fz-eth0 goldmane-666569f655- calico-system a12545f0-91c5-4708-a845-2a7a18a8616c 815 0 2026-01-23 17:28:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547.1.0-a-f00ee6181d goldmane-666569f655-wq8fz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali07d70eaaf10 [] [] }} ContainerID="35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9" Namespace="calico-system" Pod="goldmane-666569f655-wq8fz" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-goldmane--666569f655--wq8fz-" Jan 23 17:29:13.328768 containerd[2108]: 2026-01-23 17:29:11.754 [INFO][4804] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9" Namespace="calico-system" Pod="goldmane-666569f655-wq8fz" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-goldmane--666569f655--wq8fz-eth0" Jan 23 17:29:13.328768 containerd[2108]: 2026-01-23 17:29:11.772 [INFO][4820] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9" HandleID="k8s-pod-network.35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9" Workload="ci--4547.1.0--a--f00ee6181d-k8s-goldmane--666569f655--wq8fz-eth0" Jan 23 
17:29:13.329146 containerd[2108]: 2026-01-23 17:29:11.772 [INFO][4820] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9" HandleID="k8s-pod-network.35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9" Workload="ci--4547.1.0--a--f00ee6181d-k8s-goldmane--666569f655--wq8fz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.1.0-a-f00ee6181d", "pod":"goldmane-666569f655-wq8fz", "timestamp":"2026-01-23 17:29:11.772368018 +0000 UTC"}, Hostname:"ci-4547.1.0-a-f00ee6181d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:29:13.329146 containerd[2108]: 2026-01-23 17:29:11.772 [INFO][4820] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:29:13.329146 containerd[2108]: 2026-01-23 17:29:11.772 [INFO][4820] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:29:13.329146 containerd[2108]: 2026-01-23 17:29:11.772 [INFO][4820] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-f00ee6181d' Jan 23 17:29:13.329146 containerd[2108]: 2026-01-23 17:29:11.777 [INFO][4820] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.329146 containerd[2108]: 2026-01-23 17:29:11.781 [INFO][4820] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.329146 containerd[2108]: 2026-01-23 17:29:11.784 [INFO][4820] ipam/ipam.go 511: Trying affinity for 192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.329146 containerd[2108]: 2026-01-23 17:29:11.785 [INFO][4820] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.329146 containerd[2108]: 2026-01-23 17:29:11.786 [INFO][4820] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.330209 containerd[2108]: 2026-01-23 17:29:11.787 [INFO][4820] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.192/26 handle="k8s-pod-network.35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.330209 containerd[2108]: 2026-01-23 17:29:11.788 [INFO][4820] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9 Jan 23 17:29:13.330209 containerd[2108]: 2026-01-23 17:29:11.793 [INFO][4820] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.192/26 handle="k8s-pod-network.35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.330209 containerd[2108]: 2026-01-23 17:29:12.306 [INFO][4820] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.105.193/26] block=192.168.105.192/26 handle="k8s-pod-network.35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.330209 containerd[2108]: 2026-01-23 17:29:12.306 [INFO][4820] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.193/26] handle="k8s-pod-network.35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.330209 containerd[2108]: 2026-01-23 17:29:12.306 [INFO][4820] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:29:13.330209 containerd[2108]: 2026-01-23 17:29:12.306 [INFO][4820] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.193/26] IPv6=[] ContainerID="35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9" HandleID="k8s-pod-network.35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9" Workload="ci--4547.1.0--a--f00ee6181d-k8s-goldmane--666569f655--wq8fz-eth0" Jan 23 17:29:13.330410 containerd[2108]: 2026-01-23 17:29:12.309 [INFO][4804] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9" Namespace="calico-system" Pod="goldmane-666569f655-wq8fz" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-goldmane--666569f655--wq8fz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--f00ee6181d-k8s-goldmane--666569f655--wq8fz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a12545f0-91c5-4708-a845-2a7a18a8616c", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-f00ee6181d", ContainerID:"", Pod:"goldmane-666569f655-wq8fz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali07d70eaaf10", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:29:13.330469 containerd[2108]: 2026-01-23 17:29:12.309 [INFO][4804] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.193/32] ContainerID="35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9" Namespace="calico-system" Pod="goldmane-666569f655-wq8fz" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-goldmane--666569f655--wq8fz-eth0" Jan 23 17:29:13.330469 containerd[2108]: 2026-01-23 17:29:12.309 [INFO][4804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07d70eaaf10 ContainerID="35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9" Namespace="calico-system" Pod="goldmane-666569f655-wq8fz" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-goldmane--666569f655--wq8fz-eth0" Jan 23 17:29:13.330469 containerd[2108]: 2026-01-23 17:29:13.310 [INFO][4804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9" Namespace="calico-system" Pod="goldmane-666569f655-wq8fz" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-goldmane--666569f655--wq8fz-eth0" Jan 23 17:29:13.330610 containerd[2108]: 2026-01-23 17:29:13.311 [INFO][4804] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9" Namespace="calico-system" Pod="goldmane-666569f655-wq8fz" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-goldmane--666569f655--wq8fz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--f00ee6181d-k8s-goldmane--666569f655--wq8fz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a12545f0-91c5-4708-a845-2a7a18a8616c", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-f00ee6181d", ContainerID:"35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9", Pod:"goldmane-666569f655-wq8fz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.105.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali07d70eaaf10", MAC:"de:82:1d:2b:4e:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:29:13.330651 containerd[2108]: 2026-01-23 17:29:13.324 [INFO][4804] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9" Namespace="calico-system" Pod="goldmane-666569f655-wq8fz" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-goldmane--666569f655--wq8fz-eth0" Jan 23 17:29:13.534114 systemd-networkd[1692]: calided362a413a: Link UP Jan 23 17:29:13.535362 systemd-networkd[1692]: calided362a413a: Gained carrier Jan 23 17:29:13.563361 containerd[2108]: 2026-01-23 17:29:13.431 [INFO][4889] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 17:29:13.563361 containerd[2108]: 2026-01-23 17:29:13.440 [INFO][4889] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--f00ee6181d-k8s-calico--kube--controllers--7fcbcd85c4--2prtk-eth0 calico-kube-controllers-7fcbcd85c4- calico-system c60749af-cedd-49c6-899a-24ca91720bf5 811 0 2026-01-23 17:28:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7fcbcd85c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547.1.0-a-f00ee6181d calico-kube-controllers-7fcbcd85c4-2prtk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calided362a413a [] [] }} ContainerID="2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081" Namespace="calico-system" Pod="calico-kube-controllers-7fcbcd85c4-2prtk" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--kube--controllers--7fcbcd85c4--2prtk-" Jan 23 17:29:13.563361 containerd[2108]: 2026-01-23 17:29:13.440 [INFO][4889] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081" Namespace="calico-system" Pod="calico-kube-controllers-7fcbcd85c4-2prtk" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--kube--controllers--7fcbcd85c4--2prtk-eth0" 
Jan 23 17:29:13.563361 containerd[2108]: 2026-01-23 17:29:13.482 [INFO][4921] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081" HandleID="k8s-pod-network.2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081" Workload="ci--4547.1.0--a--f00ee6181d-k8s-calico--kube--controllers--7fcbcd85c4--2prtk-eth0" Jan 23 17:29:13.564291 containerd[2108]: 2026-01-23 17:29:13.482 [INFO][4921] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081" HandleID="k8s-pod-network.2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081" Workload="ci--4547.1.0--a--f00ee6181d-k8s-calico--kube--controllers--7fcbcd85c4--2prtk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.1.0-a-f00ee6181d", "pod":"calico-kube-controllers-7fcbcd85c4-2prtk", "timestamp":"2026-01-23 17:29:13.48200773 +0000 UTC"}, Hostname:"ci-4547.1.0-a-f00ee6181d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:29:13.564291 containerd[2108]: 2026-01-23 17:29:13.482 [INFO][4921] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:29:13.564291 containerd[2108]: 2026-01-23 17:29:13.482 [INFO][4921] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:29:13.564291 containerd[2108]: 2026-01-23 17:29:13.482 [INFO][4921] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-f00ee6181d' Jan 23 17:29:13.564291 containerd[2108]: 2026-01-23 17:29:13.489 [INFO][4921] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.564291 containerd[2108]: 2026-01-23 17:29:13.496 [INFO][4921] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.564291 containerd[2108]: 2026-01-23 17:29:13.501 [INFO][4921] ipam/ipam.go 511: Trying affinity for 192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.564291 containerd[2108]: 2026-01-23 17:29:13.503 [INFO][4921] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.564291 containerd[2108]: 2026-01-23 17:29:13.506 [INFO][4921] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.564782 containerd[2108]: 2026-01-23 17:29:13.506 [INFO][4921] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.192/26 handle="k8s-pod-network.2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.564782 containerd[2108]: 2026-01-23 17:29:13.507 [INFO][4921] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081 Jan 23 17:29:13.564782 containerd[2108]: 2026-01-23 17:29:13.515 [INFO][4921] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.192/26 handle="k8s-pod-network.2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.564782 containerd[2108]: 2026-01-23 17:29:13.525 [INFO][4921] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.105.194/26] block=192.168.105.192/26 handle="k8s-pod-network.2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.564782 containerd[2108]: 2026-01-23 17:29:13.525 [INFO][4921] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.194/26] handle="k8s-pod-network.2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.564782 containerd[2108]: 2026-01-23 17:29:13.525 [INFO][4921] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:29:13.564782 containerd[2108]: 2026-01-23 17:29:13.526 [INFO][4921] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.194/26] IPv6=[] ContainerID="2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081" HandleID="k8s-pod-network.2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081" Workload="ci--4547.1.0--a--f00ee6181d-k8s-calico--kube--controllers--7fcbcd85c4--2prtk-eth0" Jan 23 17:29:13.565038 containerd[2108]: 2026-01-23 17:29:13.528 [INFO][4889] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081" Namespace="calico-system" Pod="calico-kube-controllers-7fcbcd85c4-2prtk" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--kube--controllers--7fcbcd85c4--2prtk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--f00ee6181d-k8s-calico--kube--controllers--7fcbcd85c4--2prtk-eth0", GenerateName:"calico-kube-controllers-7fcbcd85c4-", Namespace:"calico-system", SelfLink:"", UID:"c60749af-cedd-49c6-899a-24ca91720bf5", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fcbcd85c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-f00ee6181d", ContainerID:"", Pod:"calico-kube-controllers-7fcbcd85c4-2prtk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calided362a413a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:29:13.565115 containerd[2108]: 2026-01-23 17:29:13.528 [INFO][4889] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.194/32] ContainerID="2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081" Namespace="calico-system" Pod="calico-kube-controllers-7fcbcd85c4-2prtk" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--kube--controllers--7fcbcd85c4--2prtk-eth0" Jan 23 17:29:13.565115 containerd[2108]: 2026-01-23 17:29:13.529 [INFO][4889] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calided362a413a ContainerID="2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081" Namespace="calico-system" Pod="calico-kube-controllers-7fcbcd85c4-2prtk" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--kube--controllers--7fcbcd85c4--2prtk-eth0" Jan 23 17:29:13.565115 containerd[2108]: 2026-01-23 17:29:13.536 [INFO][4889] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081" Namespace="calico-system" Pod="calico-kube-controllers-7fcbcd85c4-2prtk" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--kube--controllers--7fcbcd85c4--2prtk-eth0" Jan 23 17:29:13.565317 containerd[2108]: 2026-01-23 17:29:13.537 [INFO][4889] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081" Namespace="calico-system" Pod="calico-kube-controllers-7fcbcd85c4-2prtk" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--kube--controllers--7fcbcd85c4--2prtk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--f00ee6181d-k8s-calico--kube--controllers--7fcbcd85c4--2prtk-eth0", GenerateName:"calico-kube-controllers-7fcbcd85c4-", Namespace:"calico-system", SelfLink:"", UID:"c60749af-cedd-49c6-899a-24ca91720bf5", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fcbcd85c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-f00ee6181d", ContainerID:"2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081", Pod:"calico-kube-controllers-7fcbcd85c4-2prtk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.194/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calided362a413a", MAC:"d6:f5:f6:b1:e8:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:29:13.565620 containerd[2108]: 2026-01-23 17:29:13.560 [INFO][4889] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081" Namespace="calico-system" Pod="calico-kube-controllers-7fcbcd85c4-2prtk" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--kube--controllers--7fcbcd85c4--2prtk-eth0" Jan 23 17:29:13.756000 audit: BPF prog-id=199 op=LOAD Jan 23 17:29:13.756000 audit[4984]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc7f1fff8 a2=98 a3=ffffc7f1ffe8 items=0 ppid=4901 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.756000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 17:29:13.757000 audit: BPF prog-id=199 op=UNLOAD Jan 23 17:29:13.757000 audit[4984]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc7f1ffc8 a3=0 items=0 ppid=4901 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.757000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 17:29:13.757000 audit: BPF prog-id=200 op=LOAD Jan 23 17:29:13.757000 audit[4984]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc7f1fea8 a2=74 a3=95 items=0 ppid=4901 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.757000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 17:29:13.757000 audit: BPF prog-id=200 op=UNLOAD Jan 23 17:29:13.757000 audit[4984]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4901 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.757000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 17:29:13.757000 audit: BPF prog-id=201 op=LOAD Jan 23 17:29:13.757000 audit[4984]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc7f1fed8 a2=40 a3=ffffc7f1ff08 items=0 ppid=4901 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.757000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 17:29:13.757000 audit: BPF prog-id=201 op=UNLOAD Jan 23 17:29:13.757000 audit[4984]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffc7f1ff08 items=0 ppid=4901 pid=4984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.757000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 17:29:13.758000 audit: BPF prog-id=202 op=LOAD Jan 23 17:29:13.758000 audit[4985]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc9d5caf8 a2=98 a3=ffffc9d5cae8 items=0 ppid=4901 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.758000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:29:13.758000 audit: BPF prog-id=202 op=UNLOAD Jan 23 17:29:13.758000 audit[4985]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc9d5cac8 a3=0 items=0 ppid=4901 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.758000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:29:13.759000 audit: BPF prog-id=203 op=LOAD Jan 23 17:29:13.759000 audit[4985]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc9d5c788 a2=74 a3=95 items=0 ppid=4901 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.759000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:29:13.759000 audit: BPF prog-id=203 op=UNLOAD Jan 23 17:29:13.759000 audit[4985]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4901 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.759000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:29:13.759000 audit: BPF prog-id=204 op=LOAD Jan 23 17:29:13.759000 audit[4985]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc9d5c7e8 a2=94 a3=2 items=0 ppid=4901 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.759000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:29:13.759000 audit: BPF prog-id=204 op=UNLOAD Jan 23 17:29:13.759000 audit[4985]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4901 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.759000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:29:13.842000 
audit: BPF prog-id=205 op=LOAD Jan 23 17:29:13.842000 audit[4985]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc9d5c7a8 a2=40 a3=ffffc9d5c7d8 items=0 ppid=4901 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.842000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:29:13.843000 audit: BPF prog-id=205 op=UNLOAD Jan 23 17:29:13.843000 audit[4985]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc9d5c7d8 items=0 ppid=4901 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.843000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:29:13.850000 audit: BPF prog-id=206 op=LOAD Jan 23 17:29:13.850000 audit[4985]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc9d5c7b8 a2=94 a3=4 items=0 ppid=4901 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.850000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:29:13.851000 audit: BPF prog-id=206 op=UNLOAD Jan 23 17:29:13.851000 audit[4985]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4901 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.851000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:29:13.851000 audit: BPF prog-id=207 op=LOAD Jan 23 17:29:13.851000 
audit[4985]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc9d5c5f8 a2=94 a3=5 items=0 ppid=4901 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.851000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:29:13.851000 audit: BPF prog-id=207 op=UNLOAD Jan 23 17:29:13.851000 audit[4985]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4901 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.851000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:29:13.851000 audit: BPF prog-id=208 op=LOAD Jan 23 17:29:13.851000 audit[4985]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc9d5c828 a2=94 a3=6 items=0 ppid=4901 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.851000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:29:13.851000 audit: BPF prog-id=208 op=UNLOAD Jan 23 17:29:13.851000 audit[4985]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4901 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.851000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:29:13.851000 audit: BPF prog-id=209 op=LOAD Jan 23 17:29:13.851000 audit[4985]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 
a1=ffffc9d5bff8 a2=94 a3=83 items=0 ppid=4901 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.851000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:29:13.851000 audit: BPF prog-id=210 op=LOAD Jan 23 17:29:13.851000 audit[4985]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc9d5bdb8 a2=94 a3=2 items=0 ppid=4901 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.851000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:29:13.851000 audit: BPF prog-id=210 op=UNLOAD Jan 23 17:29:13.851000 audit[4985]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4901 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.851000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:29:13.852000 audit: BPF prog-id=209 op=UNLOAD Jan 23 17:29:13.852000 audit[4985]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=25736620 a3=25729b00 items=0 ppid=4901 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.852000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:29:13.865000 audit: BPF prog-id=211 op=LOAD Jan 23 17:29:13.865000 audit[5008]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdbd63f28 a2=98 a3=ffffdbd63f18 items=0 ppid=4901 pid=5008 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.865000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 17:29:13.865000 audit: BPF prog-id=211 op=UNLOAD Jan 23 17:29:13.865000 audit[5008]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffdbd63ef8 a3=0 items=0 ppid=4901 pid=5008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.865000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 17:29:13.865000 audit: BPF prog-id=212 op=LOAD Jan 23 17:29:13.865000 audit[5008]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdbd63dd8 a2=74 a3=95 items=0 ppid=4901 pid=5008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.865000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 17:29:13.865000 audit: BPF prog-id=212 op=UNLOAD Jan 23 17:29:13.865000 audit[5008]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4901 pid=5008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.865000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 17:29:13.865000 audit: BPF prog-id=213 op=LOAD Jan 23 17:29:13.865000 audit[5008]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdbd63e08 a2=40 a3=ffffdbd63e38 items=0 ppid=4901 pid=5008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.865000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 17:29:13.865000 audit: BPF prog-id=213 op=UNLOAD Jan 23 17:29:13.865000 audit[5008]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffdbd63e38 items=0 ppid=4901 pid=5008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:13.865000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 17:29:13.892241 
systemd-networkd[1692]: cali9f8b99b9d20: Link UP Jan 23 17:29:13.892406 systemd-networkd[1692]: cali9f8b99b9d20: Gained carrier Jan 23 17:29:13.910505 containerd[2108]: 2026-01-23 17:29:13.824 [INFO][4986] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--nzs8x-eth0 calico-apiserver-66f6568cfc- calico-apiserver a129fea3-ad15-412b-9854-c14f30f3a9fd 816 0 2026-01-23 17:28:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66f6568cfc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.1.0-a-f00ee6181d calico-apiserver-66f6568cfc-nzs8x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9f8b99b9d20 [] [] }} ContainerID="7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de" Namespace="calico-apiserver" Pod="calico-apiserver-66f6568cfc-nzs8x" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--nzs8x-" Jan 23 17:29:13.910505 containerd[2108]: 2026-01-23 17:29:13.824 [INFO][4986] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de" Namespace="calico-apiserver" Pod="calico-apiserver-66f6568cfc-nzs8x" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--nzs8x-eth0" Jan 23 17:29:13.910505 containerd[2108]: 2026-01-23 17:29:13.849 [INFO][4999] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de" HandleID="k8s-pod-network.7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de" Workload="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--nzs8x-eth0" Jan 23 17:29:13.910715 containerd[2108]: 
2026-01-23 17:29:13.850 [INFO][4999] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de" HandleID="k8s-pod-network.7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de" Workload="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--nzs8x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.1.0-a-f00ee6181d", "pod":"calico-apiserver-66f6568cfc-nzs8x", "timestamp":"2026-01-23 17:29:13.849889305 +0000 UTC"}, Hostname:"ci-4547.1.0-a-f00ee6181d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:29:13.910715 containerd[2108]: 2026-01-23 17:29:13.850 [INFO][4999] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:29:13.910715 containerd[2108]: 2026-01-23 17:29:13.850 [INFO][4999] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:29:13.910715 containerd[2108]: 2026-01-23 17:29:13.850 [INFO][4999] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-f00ee6181d' Jan 23 17:29:13.910715 containerd[2108]: 2026-01-23 17:29:13.856 [INFO][4999] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.910715 containerd[2108]: 2026-01-23 17:29:13.859 [INFO][4999] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.910715 containerd[2108]: 2026-01-23 17:29:13.863 [INFO][4999] ipam/ipam.go 511: Trying affinity for 192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.910715 containerd[2108]: 2026-01-23 17:29:13.864 [INFO][4999] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.910715 containerd[2108]: 2026-01-23 17:29:13.866 [INFO][4999] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.910861 containerd[2108]: 2026-01-23 17:29:13.867 [INFO][4999] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.192/26 handle="k8s-pod-network.7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.910861 containerd[2108]: 2026-01-23 17:29:13.871 [INFO][4999] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de Jan 23 17:29:13.910861 containerd[2108]: 2026-01-23 17:29:13.880 [INFO][4999] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.192/26 handle="k8s-pod-network.7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.910861 containerd[2108]: 2026-01-23 17:29:13.887 [INFO][4999] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.105.195/26] block=192.168.105.192/26 handle="k8s-pod-network.7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.910861 containerd[2108]: 2026-01-23 17:29:13.887 [INFO][4999] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.195/26] handle="k8s-pod-network.7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:13.910861 containerd[2108]: 2026-01-23 17:29:13.887 [INFO][4999] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:29:13.910861 containerd[2108]: 2026-01-23 17:29:13.887 [INFO][4999] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.195/26] IPv6=[] ContainerID="7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de" HandleID="k8s-pod-network.7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de" Workload="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--nzs8x-eth0" Jan 23 17:29:13.910951 containerd[2108]: 2026-01-23 17:29:13.889 [INFO][4986] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de" Namespace="calico-apiserver" Pod="calico-apiserver-66f6568cfc-nzs8x" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--nzs8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--nzs8x-eth0", GenerateName:"calico-apiserver-66f6568cfc-", Namespace:"calico-apiserver", SelfLink:"", UID:"a129fea3-ad15-412b-9854-c14f30f3a9fd", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 28, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66f6568cfc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-f00ee6181d", ContainerID:"", Pod:"calico-apiserver-66f6568cfc-nzs8x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9f8b99b9d20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:29:13.910982 containerd[2108]: 2026-01-23 17:29:13.889 [INFO][4986] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.195/32] ContainerID="7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de" Namespace="calico-apiserver" Pod="calico-apiserver-66f6568cfc-nzs8x" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--nzs8x-eth0" Jan 23 17:29:13.910982 containerd[2108]: 2026-01-23 17:29:13.889 [INFO][4986] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f8b99b9d20 ContainerID="7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de" Namespace="calico-apiserver" Pod="calico-apiserver-66f6568cfc-nzs8x" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--nzs8x-eth0" Jan 23 17:29:13.910982 containerd[2108]: 2026-01-23 17:29:13.893 [INFO][4986] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de" Namespace="calico-apiserver" 
Pod="calico-apiserver-66f6568cfc-nzs8x" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--nzs8x-eth0" Jan 23 17:29:13.911026 containerd[2108]: 2026-01-23 17:29:13.893 [INFO][4986] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de" Namespace="calico-apiserver" Pod="calico-apiserver-66f6568cfc-nzs8x" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--nzs8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--nzs8x-eth0", GenerateName:"calico-apiserver-66f6568cfc-", Namespace:"calico-apiserver", SelfLink:"", UID:"a129fea3-ad15-412b-9854-c14f30f3a9fd", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 28, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66f6568cfc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-f00ee6181d", ContainerID:"7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de", Pod:"calico-apiserver-66f6568cfc-nzs8x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali9f8b99b9d20", MAC:"76:bd:19:e1:73:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:29:13.911058 containerd[2108]: 2026-01-23 17:29:13.907 [INFO][4986] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de" Namespace="calico-apiserver" Pod="calico-apiserver-66f6568cfc-nzs8x" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--nzs8x-eth0" Jan 23 17:29:14.575845 systemd-networkd[1692]: cali1bdab091d30: Link UP Jan 23 17:29:14.576820 systemd-networkd[1692]: cali1bdab091d30: Gained carrier Jan 23 17:29:14.597065 containerd[2108]: 2026-01-23 17:29:14.516 [INFO][5026] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--cc7gn-eth0 coredns-668d6bf9bc- kube-system 0601b217-86e0-4d5d-8bdc-4a1067d58ca6 804 0 2026-01-23 17:28:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.1.0-a-f00ee6181d coredns-668d6bf9bc-cc7gn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1bdab091d30 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-cc7gn" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--cc7gn-" Jan 23 17:29:14.597065 containerd[2108]: 2026-01-23 17:29:14.516 [INFO][5026] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-cc7gn" 
WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--cc7gn-eth0" Jan 23 17:29:14.597065 containerd[2108]: 2026-01-23 17:29:14.539 [INFO][5039] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2" HandleID="k8s-pod-network.e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2" Workload="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--cc7gn-eth0" Jan 23 17:29:14.597465 containerd[2108]: 2026-01-23 17:29:14.539 [INFO][5039] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2" HandleID="k8s-pod-network.e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2" Workload="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--cc7gn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3000), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.1.0-a-f00ee6181d", "pod":"coredns-668d6bf9bc-cc7gn", "timestamp":"2026-01-23 17:29:14.539341101 +0000 UTC"}, Hostname:"ci-4547.1.0-a-f00ee6181d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:29:14.597465 containerd[2108]: 2026-01-23 17:29:14.539 [INFO][5039] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:29:14.597465 containerd[2108]: 2026-01-23 17:29:14.539 [INFO][5039] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:29:14.597465 containerd[2108]: 2026-01-23 17:29:14.539 [INFO][5039] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-f00ee6181d' Jan 23 17:29:14.597465 containerd[2108]: 2026-01-23 17:29:14.545 [INFO][5039] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:14.597465 containerd[2108]: 2026-01-23 17:29:14.548 [INFO][5039] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:14.597465 containerd[2108]: 2026-01-23 17:29:14.551 [INFO][5039] ipam/ipam.go 511: Trying affinity for 192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:14.597465 containerd[2108]: 2026-01-23 17:29:14.552 [INFO][5039] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:14.597465 containerd[2108]: 2026-01-23 17:29:14.554 [INFO][5039] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:14.597617 containerd[2108]: 2026-01-23 17:29:14.554 [INFO][5039] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.192/26 handle="k8s-pod-network.e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:14.597617 containerd[2108]: 2026-01-23 17:29:14.556 [INFO][5039] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2 Jan 23 17:29:14.597617 containerd[2108]: 2026-01-23 17:29:14.561 [INFO][5039] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.192/26 handle="k8s-pod-network.e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:14.597617 containerd[2108]: 2026-01-23 17:29:14.570 [INFO][5039] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.105.196/26] block=192.168.105.192/26 handle="k8s-pod-network.e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:14.597617 containerd[2108]: 2026-01-23 17:29:14.570 [INFO][5039] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.196/26] handle="k8s-pod-network.e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:14.597617 containerd[2108]: 2026-01-23 17:29:14.570 [INFO][5039] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:29:14.597617 containerd[2108]: 2026-01-23 17:29:14.570 [INFO][5039] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.196/26] IPv6=[] ContainerID="e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2" HandleID="k8s-pod-network.e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2" Workload="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--cc7gn-eth0" Jan 23 17:29:14.597713 containerd[2108]: 2026-01-23 17:29:14.572 [INFO][5026] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-cc7gn" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--cc7gn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--cc7gn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0601b217-86e0-4d5d-8bdc-4a1067d58ca6", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 28, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-f00ee6181d", ContainerID:"", Pod:"coredns-668d6bf9bc-cc7gn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1bdab091d30", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:29:14.597713 containerd[2108]: 2026-01-23 17:29:14.572 [INFO][5026] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.196/32] ContainerID="e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-cc7gn" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--cc7gn-eth0" Jan 23 17:29:14.597713 containerd[2108]: 2026-01-23 17:29:14.572 [INFO][5026] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1bdab091d30 ContainerID="e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-cc7gn" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--cc7gn-eth0" Jan 23 17:29:14.597713 containerd[2108]: 2026-01-23 17:29:14.577 [INFO][5026] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-cc7gn" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--cc7gn-eth0" Jan 23 17:29:14.597713 containerd[2108]: 2026-01-23 17:29:14.577 [INFO][5026] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-cc7gn" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--cc7gn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--cc7gn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0601b217-86e0-4d5d-8bdc-4a1067d58ca6", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 28, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-f00ee6181d", ContainerID:"e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2", Pod:"coredns-668d6bf9bc-cc7gn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1bdab091d30", 
MAC:"3a:a6:f9:ee:95:6e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:29:14.597713 containerd[2108]: 2026-01-23 17:29:14.594 [INFO][5026] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2" Namespace="kube-system" Pod="coredns-668d6bf9bc-cc7gn" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--cc7gn-eth0" Jan 23 17:29:14.722412 systemd-networkd[1692]: cali07d70eaaf10: Gained IPv6LL Jan 23 17:29:15.005517 kubelet[3639]: I0123 17:29:15.005465 3639 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c635441-6948-4d33-9972-c0de361e6d46" path="/var/lib/kubelet/pods/1c635441-6948-4d33-9972-c0de361e6d46/volumes" Jan 23 17:29:15.117049 systemd-networkd[1692]: vxlan.calico: Link UP Jan 23 17:29:15.118680 systemd-networkd[1692]: vxlan.calico: Gained carrier Jan 23 17:29:15.139000 audit: BPF prog-id=214 op=LOAD Jan 23 17:29:15.139000 audit[5067]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffb18f348 a2=98 a3=fffffb18f338 items=0 ppid=4901 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.139000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:29:15.139000 audit: BPF prog-id=214 op=UNLOAD Jan 23 17:29:15.139000 audit[5067]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffffb18f318 a3=0 items=0 ppid=4901 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.139000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:29:15.140000 audit: BPF prog-id=215 op=LOAD Jan 23 17:29:15.140000 audit[5067]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffb18f028 a2=74 a3=95 items=0 ppid=4901 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.140000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:29:15.140000 audit: BPF prog-id=215 op=UNLOAD Jan 23 17:29:15.140000 audit[5067]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4901 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.140000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:29:15.140000 audit: BPF prog-id=216 op=LOAD Jan 23 17:29:15.140000 audit[5067]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffb18f088 a2=94 a3=2 items=0 ppid=4901 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.140000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:29:15.140000 audit: BPF prog-id=216 op=UNLOAD Jan 23 17:29:15.140000 audit[5067]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4901 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.140000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:29:15.140000 audit: BPF prog-id=217 op=LOAD Jan 23 17:29:15.140000 audit[5067]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffb18ef08 a2=40 a3=fffffb18ef38 items=0 ppid=4901 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.140000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:29:15.140000 audit: BPF prog-id=217 op=UNLOAD Jan 23 17:29:15.140000 audit[5067]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=fffffb18ef38 items=0 ppid=4901 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.140000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:29:15.140000 audit: BPF prog-id=218 op=LOAD Jan 23 17:29:15.140000 audit[5067]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffb18f058 a2=94 a3=b7 items=0 ppid=4901 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.140000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:29:15.140000 audit: BPF prog-id=218 op=UNLOAD Jan 23 17:29:15.140000 audit[5067]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4901 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.140000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:29:15.143000 audit: BPF prog-id=219 op=LOAD Jan 23 17:29:15.143000 audit[5067]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffb18e708 a2=94 a3=2 items=0 ppid=4901 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.143000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:29:15.143000 audit: BPF prog-id=219 op=UNLOAD Jan 23 17:29:15.143000 audit[5067]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4901 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.143000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:29:15.143000 audit: BPF prog-id=220 op=LOAD Jan 23 17:29:15.143000 audit[5067]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffb18e898 a2=94 a3=30 items=0 ppid=4901 pid=5067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.143000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:29:15.147000 audit: BPF prog-id=221 op=LOAD Jan 23 17:29:15.147000 audit[5070]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd44ff918 a2=98 a3=ffffd44ff908 items=0 ppid=4901 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.147000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:29:15.147000 audit: BPF prog-id=221 op=UNLOAD Jan 23 17:29:15.147000 audit[5070]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd44ff8e8 a3=0 items=0 ppid=4901 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.147000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:29:15.147000 audit: BPF prog-id=222 op=LOAD Jan 23 17:29:15.147000 audit[5070]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd44ff5a8 a2=74 a3=95 items=0 ppid=4901 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.147000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:29:15.147000 audit: BPF prog-id=222 op=UNLOAD Jan 23 17:29:15.147000 audit[5070]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4901 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.147000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:29:15.147000 audit: BPF prog-id=223 op=LOAD Jan 23 17:29:15.147000 audit[5070]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd44ff608 a2=94 a3=2 items=0 ppid=4901 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.147000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:29:15.147000 audit: BPF prog-id=223 op=UNLOAD Jan 23 17:29:15.147000 audit[5070]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4901 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.147000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:29:15.170449 systemd-networkd[1692]: cali9f8b99b9d20: Gained IPv6LL Jan 23 17:29:15.172035 systemd-networkd[1692]: calided362a413a: Gained IPv6LL Jan 23 17:29:15.261000 audit: BPF prog-id=224 op=LOAD Jan 23 17:29:15.261000 audit[5070]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd44ff5c8 a2=40 a3=ffffd44ff5f8 items=0 ppid=4901 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.261000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:29:15.262000 audit: BPF prog-id=224 op=UNLOAD Jan 23 17:29:15.262000 audit[5070]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffd44ff5f8 items=0 ppid=4901 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.262000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:29:15.273000 audit: BPF prog-id=225 op=LOAD Jan 23 17:29:15.273000 audit[5070]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd44ff5d8 a2=94 a3=4 items=0 ppid=4901 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.273000 
audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:29:15.273000 audit: BPF prog-id=225 op=UNLOAD Jan 23 17:29:15.273000 audit[5070]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4901 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.273000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:29:15.273000 audit: BPF prog-id=226 op=LOAD Jan 23 17:29:15.273000 audit[5070]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd44ff418 a2=94 a3=5 items=0 ppid=4901 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.273000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:29:15.273000 audit: BPF prog-id=226 op=UNLOAD Jan 23 17:29:15.273000 audit[5070]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4901 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.273000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:29:15.273000 audit: BPF prog-id=227 op=LOAD Jan 23 17:29:15.273000 audit[5070]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd44ff648 a2=94 a3=6 items=0 ppid=4901 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.273000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:29:15.273000 audit: BPF prog-id=227 op=UNLOAD Jan 23 17:29:15.273000 audit[5070]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4901 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.273000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:29:15.273000 audit: BPF prog-id=228 op=LOAD Jan 23 17:29:15.273000 audit[5070]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd44fee18 a2=94 a3=83 items=0 ppid=4901 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.273000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:29:15.273000 audit: BPF prog-id=229 op=LOAD Jan 23 17:29:15.273000 audit[5070]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=8 a0=5 a1=ffffd44febd8 a2=94 a3=2 items=0 ppid=4901 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.273000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:29:15.273000 audit: BPF prog-id=229 op=UNLOAD Jan 23 17:29:15.273000 audit[5070]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=8 a1=57156c a2=c a3=0 items=0 ppid=4901 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.273000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:29:15.274000 audit: BPF prog-id=228 op=UNLOAD Jan 23 17:29:15.274000 audit[5070]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=497b620 a3=496eb00 items=0 ppid=4901 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.274000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:29:15.281000 audit: BPF prog-id=220 op=UNLOAD Jan 23 17:29:15.281000 audit[4901]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=400079fec0 a2=0 a3=0 items=0 ppid=4830 pid=4901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:15.281000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 23 17:29:15.335766 systemd-networkd[1692]: calid7c58dcf3ee: Link UP Jan 23 17:29:15.338114 systemd-networkd[1692]: calid7c58dcf3ee: Gained carrier Jan 23 17:29:15.358729 containerd[2108]: 2026-01-23 17:29:15.273 [INFO][5073] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--2tmvt-eth0 coredns-668d6bf9bc- kube-system fc7b3f96-a778-4d13-a8d9-a43b196fdac0 818 0 2026-01-23 17:28:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.1.0-a-f00ee6181d coredns-668d6bf9bc-2tmvt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid7c58dcf3ee [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tmvt" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--2tmvt-" Jan 23 17:29:15.358729 containerd[2108]: 2026-01-23 17:29:15.273 [INFO][5073] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-2tmvt" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--2tmvt-eth0" Jan 23 17:29:15.358729 containerd[2108]: 2026-01-23 17:29:15.295 [INFO][5087] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65" HandleID="k8s-pod-network.57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65" Workload="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--2tmvt-eth0" Jan 23 17:29:15.358729 containerd[2108]: 2026-01-23 17:29:15.295 [INFO][5087] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65" HandleID="k8s-pod-network.57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65" Workload="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--2tmvt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b090), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.1.0-a-f00ee6181d", "pod":"coredns-668d6bf9bc-2tmvt", "timestamp":"2026-01-23 17:29:15.295546838 +0000 UTC"}, Hostname:"ci-4547.1.0-a-f00ee6181d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:29:15.358729 containerd[2108]: 2026-01-23 17:29:15.295 [INFO][5087] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:29:15.358729 containerd[2108]: 2026-01-23 17:29:15.295 [INFO][5087] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:29:15.358729 containerd[2108]: 2026-01-23 17:29:15.295 [INFO][5087] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-f00ee6181d' Jan 23 17:29:15.358729 containerd[2108]: 2026-01-23 17:29:15.301 [INFO][5087] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:15.358729 containerd[2108]: 2026-01-23 17:29:15.307 [INFO][5087] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:15.358729 containerd[2108]: 2026-01-23 17:29:15.311 [INFO][5087] ipam/ipam.go 511: Trying affinity for 192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:15.358729 containerd[2108]: 2026-01-23 17:29:15.312 [INFO][5087] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:15.358729 containerd[2108]: 2026-01-23 17:29:15.314 [INFO][5087] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:15.358729 containerd[2108]: 2026-01-23 17:29:15.314 [INFO][5087] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.192/26 handle="k8s-pod-network.57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:15.358729 containerd[2108]: 2026-01-23 17:29:15.315 [INFO][5087] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65 Jan 23 17:29:15.358729 containerd[2108]: 2026-01-23 17:29:15.320 [INFO][5087] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.192/26 handle="k8s-pod-network.57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:15.358729 containerd[2108]: 2026-01-23 17:29:15.330 [INFO][5087] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.105.197/26] block=192.168.105.192/26 handle="k8s-pod-network.57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:15.358729 containerd[2108]: 2026-01-23 17:29:15.330 [INFO][5087] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.197/26] handle="k8s-pod-network.57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:15.358729 containerd[2108]: 2026-01-23 17:29:15.331 [INFO][5087] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:29:15.358729 containerd[2108]: 2026-01-23 17:29:15.331 [INFO][5087] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.197/26] IPv6=[] ContainerID="57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65" HandleID="k8s-pod-network.57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65" Workload="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--2tmvt-eth0" Jan 23 17:29:15.359400 containerd[2108]: 2026-01-23 17:29:15.333 [INFO][5073] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tmvt" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--2tmvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--2tmvt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fc7b3f96-a778-4d13-a8d9-a43b196fdac0", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 28, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-f00ee6181d", ContainerID:"", Pod:"coredns-668d6bf9bc-2tmvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7c58dcf3ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:29:15.359400 containerd[2108]: 2026-01-23 17:29:15.333 [INFO][5073] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.197/32] ContainerID="57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tmvt" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--2tmvt-eth0" Jan 23 17:29:15.359400 containerd[2108]: 2026-01-23 17:29:15.333 [INFO][5073] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7c58dcf3ee ContainerID="57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tmvt" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--2tmvt-eth0" Jan 23 17:29:15.359400 containerd[2108]: 2026-01-23 17:29:15.338 [INFO][5073] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tmvt" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--2tmvt-eth0" Jan 23 17:29:15.359400 containerd[2108]: 2026-01-23 17:29:15.338 [INFO][5073] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tmvt" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--2tmvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--2tmvt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fc7b3f96-a778-4d13-a8d9-a43b196fdac0", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 28, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-f00ee6181d", ContainerID:"57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65", Pod:"coredns-668d6bf9bc-2tmvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7c58dcf3ee", 
MAC:"36:76:7c:00:79:6f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:29:15.359400 containerd[2108]: 2026-01-23 17:29:15.356 [INFO][5073] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tmvt" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-coredns--668d6bf9bc--2tmvt-eth0" Jan 23 17:29:15.746452 systemd-networkd[1692]: cali1bdab091d30: Gained IPv6LL Jan 23 17:29:16.024360 kernel: kauditd_printk_skb: 191 callbacks suppressed Jan 23 17:29:16.024511 kernel: audit: type=1325 audit(1769189356.019:668): table=nat:124 family=2 entries=15 op=nft_register_chain pid=5122 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:29:16.019000 audit[5122]: NETFILTER_CFG table=nat:124 family=2 entries=15 op=nft_register_chain pid=5122 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:29:16.019000 audit[5122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=fffff9464380 a2=0 a3=ffff8bce4fa8 items=0 ppid=4901 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:16.050089 kernel: audit: type=1300 audit(1769189356.019:668): arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=fffff9464380 a2=0 a3=ffff8bce4fa8 items=0 ppid=4901 pid=5122 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:16.062372 kernel: audit: type=1327 audit(1769189356.019:668): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:29:16.019000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:29:16.022000 audit[5123]: NETFILTER_CFG table=mangle:125 family=2 entries=16 op=nft_register_chain pid=5123 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:29:16.072647 kernel: audit: type=1325 audit(1769189356.022:669): table=mangle:125 family=2 entries=16 op=nft_register_chain pid=5123 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:29:16.022000 audit[5123]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffcd7ab610 a2=0 a3=ffffac7b9fa8 items=0 ppid=4901 pid=5123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:16.095122 kernel: audit: type=1300 audit(1769189356.022:669): arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffcd7ab610 a2=0 a3=ffffac7b9fa8 items=0 ppid=4901 pid=5123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:16.022000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:29:16.107357 kernel: audit: 
type=1327 audit(1769189356.022:669): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:29:16.114000 audit[5121]: NETFILTER_CFG table=raw:126 family=2 entries=21 op=nft_register_chain pid=5121 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:29:16.114000 audit[5121]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffd8295d60 a2=0 a3=ffffb6918fa8 items=0 ppid=4901 pid=5121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:16.142638 kernel: audit: type=1325 audit(1769189356.114:670): table=raw:126 family=2 entries=21 op=nft_register_chain pid=5121 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:29:16.142841 kernel: audit: type=1300 audit(1769189356.114:670): arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffd8295d60 a2=0 a3=ffffb6918fa8 items=0 ppid=4901 pid=5121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:16.114000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:29:16.154479 kernel: audit: type=1327 audit(1769189356.114:670): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:29:16.195457 systemd-networkd[1692]: vxlan.calico: Gained IPv6LL Jan 23 17:29:16.116000 audit[5124]: NETFILTER_CFG table=filter:127 family=2 entries=185 op=nft_register_chain pid=5124 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 
23 17:29:16.116000 audit[5124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=105860 a0=3 a1=fffffff7d450 a2=0 a3=ffffa17cffa8 items=0 ppid=4901 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:16.116000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:29:16.218299 kernel: audit: type=1325 audit(1769189356.116:671): table=filter:127 family=2 entries=185 op=nft_register_chain pid=5124 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:29:16.232000 audit[5135]: NETFILTER_CFG table=filter:128 family=2 entries=50 op=nft_register_chain pid=5135 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:29:16.232000 audit[5135]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24384 a0=3 a1=ffffeb3729f0 a2=0 a3=ffffbaa83fa8 items=0 ppid=4901 pid=5135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:16.232000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:29:16.514422 systemd-networkd[1692]: calid7c58dcf3ee: Gained IPv6LL Jan 23 17:29:19.754709 systemd-networkd[1692]: cali54544286f16: Link UP Jan 23 17:29:19.756772 systemd-networkd[1692]: cali54544286f16: Gained carrier Jan 23 17:29:19.775557 containerd[2108]: 2026-01-23 17:29:19.685 [INFO][5137] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--f00ee6181d-k8s-csi--node--driver--dtvct-eth0 csi-node-driver- calico-system 
25c5832d-778b-4f5d-974d-1be8e7376fdb 702 0 2026-01-23 17:28:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547.1.0-a-f00ee6181d csi-node-driver-dtvct eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali54544286f16 [] [] }} ContainerID="52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d" Namespace="calico-system" Pod="csi-node-driver-dtvct" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-csi--node--driver--dtvct-" Jan 23 17:29:19.775557 containerd[2108]: 2026-01-23 17:29:19.685 [INFO][5137] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d" Namespace="calico-system" Pod="csi-node-driver-dtvct" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-csi--node--driver--dtvct-eth0" Jan 23 17:29:19.775557 containerd[2108]: 2026-01-23 17:29:19.706 [INFO][5151] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d" HandleID="k8s-pod-network.52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d" Workload="ci--4547.1.0--a--f00ee6181d-k8s-csi--node--driver--dtvct-eth0" Jan 23 17:29:19.775557 containerd[2108]: 2026-01-23 17:29:19.706 [INFO][5151] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d" HandleID="k8s-pod-network.52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d" Workload="ci--4547.1.0--a--f00ee6181d-k8s-csi--node--driver--dtvct-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af90), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4547.1.0-a-f00ee6181d", "pod":"csi-node-driver-dtvct", "timestamp":"2026-01-23 17:29:19.706463289 +0000 UTC"}, Hostname:"ci-4547.1.0-a-f00ee6181d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:29:19.775557 containerd[2108]: 2026-01-23 17:29:19.706 [INFO][5151] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:29:19.775557 containerd[2108]: 2026-01-23 17:29:19.706 [INFO][5151] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 17:29:19.775557 containerd[2108]: 2026-01-23 17:29:19.706 [INFO][5151] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-f00ee6181d' Jan 23 17:29:19.775557 containerd[2108]: 2026-01-23 17:29:19.713 [INFO][5151] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:19.775557 containerd[2108]: 2026-01-23 17:29:19.717 [INFO][5151] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:19.775557 containerd[2108]: 2026-01-23 17:29:19.723 [INFO][5151] ipam/ipam.go 511: Trying affinity for 192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:19.775557 containerd[2108]: 2026-01-23 17:29:19.724 [INFO][5151] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:19.775557 containerd[2108]: 2026-01-23 17:29:19.728 [INFO][5151] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:19.775557 containerd[2108]: 2026-01-23 17:29:19.729 [INFO][5151] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.192/26 
handle="k8s-pod-network.52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:19.775557 containerd[2108]: 2026-01-23 17:29:19.730 [INFO][5151] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d Jan 23 17:29:19.775557 containerd[2108]: 2026-01-23 17:29:19.736 [INFO][5151] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.192/26 handle="k8s-pod-network.52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:19.775557 containerd[2108]: 2026-01-23 17:29:19.750 [INFO][5151] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.198/26] block=192.168.105.192/26 handle="k8s-pod-network.52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:19.775557 containerd[2108]: 2026-01-23 17:29:19.750 [INFO][5151] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.198/26] handle="k8s-pod-network.52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:19.775557 containerd[2108]: 2026-01-23 17:29:19.750 [INFO][5151] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 17:29:19.775557 containerd[2108]: 2026-01-23 17:29:19.750 [INFO][5151] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.198/26] IPv6=[] ContainerID="52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d" HandleID="k8s-pod-network.52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d" Workload="ci--4547.1.0--a--f00ee6181d-k8s-csi--node--driver--dtvct-eth0" Jan 23 17:29:19.777116 containerd[2108]: 2026-01-23 17:29:19.751 [INFO][5137] cni-plugin/k8s.go 418: Populated endpoint ContainerID="52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d" Namespace="calico-system" Pod="csi-node-driver-dtvct" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-csi--node--driver--dtvct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--f00ee6181d-k8s-csi--node--driver--dtvct-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"25c5832d-778b-4f5d-974d-1be8e7376fdb", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-f00ee6181d", ContainerID:"", Pod:"csi-node-driver-dtvct", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.198/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali54544286f16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:29:19.777116 containerd[2108]: 2026-01-23 17:29:19.752 [INFO][5137] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.198/32] ContainerID="52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d" Namespace="calico-system" Pod="csi-node-driver-dtvct" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-csi--node--driver--dtvct-eth0" Jan 23 17:29:19.777116 containerd[2108]: 2026-01-23 17:29:19.752 [INFO][5137] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54544286f16 ContainerID="52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d" Namespace="calico-system" Pod="csi-node-driver-dtvct" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-csi--node--driver--dtvct-eth0" Jan 23 17:29:19.777116 containerd[2108]: 2026-01-23 17:29:19.756 [INFO][5137] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d" Namespace="calico-system" Pod="csi-node-driver-dtvct" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-csi--node--driver--dtvct-eth0" Jan 23 17:29:19.777116 containerd[2108]: 2026-01-23 17:29:19.758 [INFO][5137] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d" Namespace="calico-system" Pod="csi-node-driver-dtvct" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-csi--node--driver--dtvct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--f00ee6181d-k8s-csi--node--driver--dtvct-eth0", GenerateName:"csi-node-driver-", 
Namespace:"calico-system", SelfLink:"", UID:"25c5832d-778b-4f5d-974d-1be8e7376fdb", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-f00ee6181d", ContainerID:"52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d", Pod:"csi-node-driver-dtvct", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.105.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali54544286f16", MAC:"12:83:a5:34:46:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:29:19.777116 containerd[2108]: 2026-01-23 17:29:19.772 [INFO][5137] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d" Namespace="calico-system" Pod="csi-node-driver-dtvct" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-csi--node--driver--dtvct-eth0" Jan 23 17:29:19.786000 audit[5187]: NETFILTER_CFG table=filter:129 family=2 entries=48 op=nft_register_chain pid=5187 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:29:19.786000 audit[5187]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23124 a0=3 
a1=ffffdad86820 a2=0 a3=ffff9ce9afa8 items=0 ppid=4901 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:19.786000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:29:19.862725 systemd-networkd[1692]: cali357e20852e0: Link UP Jan 23 17:29:19.865913 systemd-networkd[1692]: cali357e20852e0: Gained carrier Jan 23 17:29:19.885490 containerd[2108]: 2026-01-23 17:29:19.733 [INFO][5155] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--f00ee6181d-k8s-whisker--7665dd49cb--d8kt6-eth0 whisker-7665dd49cb- calico-system 6c109b4b-3504-4b89-94ff-4a8e2ba3506a 904 0 2026-01-23 17:29:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7665dd49cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547.1.0-a-f00ee6181d whisker-7665dd49cb-d8kt6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali357e20852e0 [] [] }} ContainerID="b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408" Namespace="calico-system" Pod="whisker-7665dd49cb-d8kt6" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-whisker--7665dd49cb--d8kt6-" Jan 23 17:29:19.885490 containerd[2108]: 2026-01-23 17:29:19.733 [INFO][5155] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408" Namespace="calico-system" Pod="whisker-7665dd49cb-d8kt6" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-whisker--7665dd49cb--d8kt6-eth0" Jan 23 17:29:19.885490 containerd[2108]: 2026-01-23 17:29:19.763 [INFO][5172] ipam/ipam_plugin.go 227: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408" HandleID="k8s-pod-network.b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408" Workload="ci--4547.1.0--a--f00ee6181d-k8s-whisker--7665dd49cb--d8kt6-eth0" Jan 23 17:29:19.885490 containerd[2108]: 2026-01-23 17:29:19.763 [INFO][5172] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408" HandleID="k8s-pod-network.b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408" Workload="ci--4547.1.0--a--f00ee6181d-k8s-whisker--7665dd49cb--d8kt6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3000), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.1.0-a-f00ee6181d", "pod":"whisker-7665dd49cb-d8kt6", "timestamp":"2026-01-23 17:29:19.763046877 +0000 UTC"}, Hostname:"ci-4547.1.0-a-f00ee6181d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:29:19.885490 containerd[2108]: 2026-01-23 17:29:19.763 [INFO][5172] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:29:19.885490 containerd[2108]: 2026-01-23 17:29:19.763 [INFO][5172] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:29:19.885490 containerd[2108]: 2026-01-23 17:29:19.763 [INFO][5172] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-f00ee6181d' Jan 23 17:29:19.885490 containerd[2108]: 2026-01-23 17:29:19.814 [INFO][5172] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:19.885490 containerd[2108]: 2026-01-23 17:29:19.818 [INFO][5172] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:19.885490 containerd[2108]: 2026-01-23 17:29:19.830 [INFO][5172] ipam/ipam.go 511: Trying affinity for 192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:19.885490 containerd[2108]: 2026-01-23 17:29:19.834 [INFO][5172] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:19.885490 containerd[2108]: 2026-01-23 17:29:19.835 [INFO][5172] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:19.885490 containerd[2108]: 2026-01-23 17:29:19.837 [INFO][5172] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.192/26 handle="k8s-pod-network.b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:19.885490 containerd[2108]: 2026-01-23 17:29:19.838 [INFO][5172] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408 Jan 23 17:29:19.885490 containerd[2108]: 2026-01-23 17:29:19.843 [INFO][5172] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.192/26 handle="k8s-pod-network.b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:19.885490 containerd[2108]: 2026-01-23 17:29:19.854 [INFO][5172] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.105.199/26] block=192.168.105.192/26 handle="k8s-pod-network.b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:19.885490 containerd[2108]: 2026-01-23 17:29:19.855 [INFO][5172] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.199/26] handle="k8s-pod-network.b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:19.885490 containerd[2108]: 2026-01-23 17:29:19.855 [INFO][5172] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:29:19.885490 containerd[2108]: 2026-01-23 17:29:19.855 [INFO][5172] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.199/26] IPv6=[] ContainerID="b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408" HandleID="k8s-pod-network.b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408" Workload="ci--4547.1.0--a--f00ee6181d-k8s-whisker--7665dd49cb--d8kt6-eth0" Jan 23 17:29:19.886124 containerd[2108]: 2026-01-23 17:29:19.857 [INFO][5155] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408" Namespace="calico-system" Pod="whisker-7665dd49cb-d8kt6" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-whisker--7665dd49cb--d8kt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--f00ee6181d-k8s-whisker--7665dd49cb--d8kt6-eth0", GenerateName:"whisker-7665dd49cb-", Namespace:"calico-system", SelfLink:"", UID:"6c109b4b-3504-4b89-94ff-4a8e2ba3506a", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 29, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7665dd49cb", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-f00ee6181d", ContainerID:"", Pod:"whisker-7665dd49cb-d8kt6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.105.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali357e20852e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:29:19.886124 containerd[2108]: 2026-01-23 17:29:19.857 [INFO][5155] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.199/32] ContainerID="b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408" Namespace="calico-system" Pod="whisker-7665dd49cb-d8kt6" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-whisker--7665dd49cb--d8kt6-eth0" Jan 23 17:29:19.886124 containerd[2108]: 2026-01-23 17:29:19.857 [INFO][5155] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali357e20852e0 ContainerID="b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408" Namespace="calico-system" Pod="whisker-7665dd49cb-d8kt6" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-whisker--7665dd49cb--d8kt6-eth0" Jan 23 17:29:19.886124 containerd[2108]: 2026-01-23 17:29:19.865 [INFO][5155] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408" Namespace="calico-system" Pod="whisker-7665dd49cb-d8kt6" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-whisker--7665dd49cb--d8kt6-eth0" Jan 23 17:29:19.886124 containerd[2108]: 2026-01-23 17:29:19.867 [INFO][5155] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408" Namespace="calico-system" Pod="whisker-7665dd49cb-d8kt6" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-whisker--7665dd49cb--d8kt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--f00ee6181d-k8s-whisker--7665dd49cb--d8kt6-eth0", GenerateName:"whisker-7665dd49cb-", Namespace:"calico-system", SelfLink:"", UID:"6c109b4b-3504-4b89-94ff-4a8e2ba3506a", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 29, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7665dd49cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-f00ee6181d", ContainerID:"b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408", Pod:"whisker-7665dd49cb-d8kt6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.105.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali357e20852e0", MAC:"ca:55:01:8c:5e:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:29:19.886124 containerd[2108]: 2026-01-23 17:29:19.882 [INFO][5155] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408" Namespace="calico-system" Pod="whisker-7665dd49cb-d8kt6" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-whisker--7665dd49cb--d8kt6-eth0" Jan 23 17:29:19.906000 audit[5196]: NETFILTER_CFG table=filter:130 family=2 entries=73 op=nft_register_chain pid=5196 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:29:19.906000 audit[5196]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=38808 a0=3 a1=ffffccc68c00 a2=0 a3=ffff9e543fa8 items=0 ppid=4901 pid=5196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:19.906000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:29:19.945525 containerd[2108]: time="2026-01-23T17:29:19.945477010Z" level=info msg="connecting to shim 35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9" address="unix:///run/containerd/s/a1a348988860254b618386568b0e2ffbbd806f3d858f505ca457a81021804893" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:29:19.972285 containerd[2108]: time="2026-01-23T17:29:19.972240608Z" level=info msg="connecting to shim 2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081" address="unix:///run/containerd/s/a0c395c61a304abf6ebe0f96a45da703acc8a5c094e17dbff1b9eeb0506869ed" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:29:19.978695 containerd[2108]: time="2026-01-23T17:29:19.978662878Z" level=info msg="connecting to shim 7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de" address="unix:///run/containerd/s/49c7f6ecb6534e6324502c82e6ab31f3e819c699731c973739afc7ee6c737206" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:29:19.988647 containerd[2108]: time="2026-01-23T17:29:19.988453738Z" 
level=info msg="connecting to shim e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2" address="unix:///run/containerd/s/00a6004e2f212b3d1d40f77d6769c5f012e56ef2bb2adebad941785a8e1824ad" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:29:19.991493 systemd[1]: Started cri-containerd-35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9.scope - libcontainer container 35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9. Jan 23 17:29:20.024817 containerd[2108]: time="2026-01-23T17:29:20.024317009Z" level=info msg="connecting to shim 57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65" address="unix:///run/containerd/s/677b5da9e00ab4592dd7e36291a1178357b6c1649725ecccd19f2d4445eda973" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:29:20.026449 systemd[1]: Started cri-containerd-2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081.scope - libcontainer container 2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081. Jan 23 17:29:20.030974 systemd[1]: Started cri-containerd-7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de.scope - libcontainer container 7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de. 
Jan 23 17:29:20.033000 audit: BPF prog-id=230 op=LOAD Jan 23 17:29:20.033000 audit: BPF prog-id=231 op=LOAD Jan 23 17:29:20.033000 audit[5217]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5205 pid=5217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335616365343566336264393037623638613334373137343931313433 Jan 23 17:29:20.033000 audit: BPF prog-id=231 op=UNLOAD Jan 23 17:29:20.033000 audit[5217]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5205 pid=5217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.033000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335616365343566336264393037623638613334373137343931313433 Jan 23 17:29:20.034000 audit: BPF prog-id=232 op=LOAD Jan 23 17:29:20.034000 audit[5217]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5205 pid=5217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.034000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335616365343566336264393037623638613334373137343931313433 Jan 23 17:29:20.034000 audit: BPF prog-id=233 op=LOAD Jan 23 17:29:20.034000 audit[5217]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5205 pid=5217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335616365343566336264393037623638613334373137343931313433 Jan 23 17:29:20.034000 audit: BPF prog-id=233 op=UNLOAD Jan 23 17:29:20.034000 audit[5217]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5205 pid=5217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335616365343566336264393037623638613334373137343931313433 Jan 23 17:29:20.034000 audit: BPF prog-id=232 op=UNLOAD Jan 23 17:29:20.034000 audit[5217]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5205 pid=5217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
23 17:29:20.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335616365343566336264393037623638613334373137343931313433 Jan 23 17:29:20.034000 audit: BPF prog-id=234 op=LOAD Jan 23 17:29:20.034000 audit[5217]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5205 pid=5217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.034000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335616365343566336264393037623638613334373137343931313433 Jan 23 17:29:20.045453 systemd[1]: Started cri-containerd-e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2.scope - libcontainer container e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2. 
Jan 23 17:29:20.049000 audit: BPF prog-id=235 op=LOAD Jan 23 17:29:20.050000 audit: BPF prog-id=236 op=LOAD Jan 23 17:29:20.050000 audit[5287]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5233 pid=5287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263626361343438313235353234313037373861653738623130386165 Jan 23 17:29:20.050000 audit: BPF prog-id=236 op=UNLOAD Jan 23 17:29:20.050000 audit[5287]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5233 pid=5287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263626361343438313235353234313037373861653738623130386165 Jan 23 17:29:20.050000 audit: BPF prog-id=237 op=LOAD Jan 23 17:29:20.050000 audit[5287]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5233 pid=5287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.050000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263626361343438313235353234313037373861653738623130386165 Jan 23 17:29:20.050000 audit: BPF prog-id=238 op=LOAD Jan 23 17:29:20.050000 audit[5287]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5233 pid=5287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263626361343438313235353234313037373861653738623130386165 Jan 23 17:29:20.050000 audit: BPF prog-id=238 op=UNLOAD Jan 23 17:29:20.050000 audit[5287]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5233 pid=5287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263626361343438313235353234313037373861653738623130386165 Jan 23 17:29:20.050000 audit: BPF prog-id=237 op=UNLOAD Jan 23 17:29:20.050000 audit[5287]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5233 pid=5287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
23 17:29:20.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263626361343438313235353234313037373861653738623130386165 Jan 23 17:29:20.050000 audit: BPF prog-id=239 op=LOAD Jan 23 17:29:20.050000 audit[5287]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5233 pid=5287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263626361343438313235353234313037373861653738623130386165 Jan 23 17:29:20.063838 containerd[2108]: time="2026-01-23T17:29:20.063710136Z" level=info msg="connecting to shim 52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d" address="unix:///run/containerd/s/e5025fbb75eebd925ffd756ba610699d2af9fa4b656745cd4f924791f926dca5" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:29:20.064000 audit: BPF prog-id=240 op=LOAD Jan 23 17:29:20.065000 audit: BPF prog-id=241 op=LOAD Jan 23 17:29:20.065000 audit[5314]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5270 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.065000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535396131656565306430656536663461643364626163653930383634 Jan 23 17:29:20.065000 audit: BPF prog-id=241 op=UNLOAD Jan 23 17:29:20.065000 audit[5314]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5270 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535396131656565306430656536663461643364626163653930383634 Jan 23 17:29:20.065000 audit: BPF prog-id=242 op=LOAD Jan 23 17:29:20.065000 audit[5314]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5270 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535396131656565306430656536663461643364626163653930383634 Jan 23 17:29:20.065000 audit: BPF prog-id=243 op=LOAD Jan 23 17:29:20.065000 audit[5314]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5270 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 17:29:20.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535396131656565306430656536663461643364626163653930383634 Jan 23 17:29:20.065000 audit: BPF prog-id=243 op=UNLOAD Jan 23 17:29:20.065000 audit[5314]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5270 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535396131656565306430656536663461643364626163653930383634 Jan 23 17:29:20.065000 audit: BPF prog-id=242 op=UNLOAD Jan 23 17:29:20.065000 audit[5314]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5270 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535396131656565306430656536663461643364626163653930383634 Jan 23 17:29:20.065000 audit: BPF prog-id=244 op=LOAD Jan 23 17:29:20.065000 audit[5314]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5270 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.065000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535396131656565306430656536663461643364626163653930383634 Jan 23 17:29:20.089852 systemd[1]: Started cri-containerd-57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65.scope - libcontainer container 57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65. Jan 23 17:29:20.092961 containerd[2108]: time="2026-01-23T17:29:20.092911797Z" level=info msg="connecting to shim b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408" address="unix:///run/containerd/s/97d9862c89bf62dea78dd0355c4a55c3fc54e0a059a82cfff43899a99afa8e02" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:29:20.118000 audit: BPF prog-id=245 op=LOAD Jan 23 17:29:20.118000 audit: BPF prog-id=246 op=LOAD Jan 23 17:29:20.118000 audit[5283]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000228180 a2=98 a3=0 items=0 ppid=5249 pid=5283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763623132333338363866383233623037373462363066393162363964 Jan 23 17:29:20.118000 audit: BPF prog-id=246 op=UNLOAD Jan 23 17:29:20.118000 audit[5283]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5249 pid=5283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 17:29:20.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763623132333338363866383233623037373462363066393162363964 Jan 23 17:29:20.119000 audit: BPF prog-id=247 op=LOAD Jan 23 17:29:20.119000 audit[5283]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002283e8 a2=98 a3=0 items=0 ppid=5249 pid=5283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763623132333338363866383233623037373462363066393162363964 Jan 23 17:29:20.119000 audit: BPF prog-id=248 op=LOAD Jan 23 17:29:20.119000 audit[5283]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000228168 a2=98 a3=0 items=0 ppid=5249 pid=5283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763623132333338363866383233623037373462363066393162363964 Jan 23 17:29:20.119000 audit: BPF prog-id=248 op=UNLOAD Jan 23 17:29:20.119000 audit[5283]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5249 pid=5283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763623132333338363866383233623037373462363066393162363964 Jan 23 17:29:20.119000 audit: BPF prog-id=247 op=UNLOAD Jan 23 17:29:20.119000 audit[5283]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5249 pid=5283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763623132333338363866383233623037373462363066393162363964 Jan 23 17:29:20.119000 audit: BPF prog-id=249 op=LOAD Jan 23 17:29:20.119000 audit[5283]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000228648 a2=98 a3=0 items=0 ppid=5249 pid=5283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763623132333338363866383233623037373462363066393162363964 Jan 23 17:29:20.128469 systemd[1]: Started cri-containerd-52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d.scope - libcontainer container 52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d. 
Jan 23 17:29:20.148000 audit: BPF prog-id=250 op=LOAD Jan 23 17:29:20.149000 audit: BPF prog-id=251 op=LOAD Jan 23 17:29:20.149000 audit[5363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400018c180 a2=98 a3=0 items=0 ppid=5334 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537616662633535326636376231626138316532656638633438653265 Jan 23 17:29:20.149000 audit: BPF prog-id=251 op=UNLOAD Jan 23 17:29:20.149000 audit[5363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5334 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537616662633535326636376231626138316532656638633438653265 Jan 23 17:29:20.150000 audit: BPF prog-id=252 op=LOAD Jan 23 17:29:20.150000 audit[5363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400018c3e8 a2=98 a3=0 items=0 ppid=5334 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.150000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537616662633535326636376231626138316532656638633438653265 Jan 23 17:29:20.150000 audit: BPF prog-id=253 op=LOAD Jan 23 17:29:20.150000 audit[5363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400018c168 a2=98 a3=0 items=0 ppid=5334 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537616662633535326636376231626138316532656638633438653265 Jan 23 17:29:20.150000 audit: BPF prog-id=253 op=UNLOAD Jan 23 17:29:20.150000 audit[5363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5334 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537616662633535326636376231626138316532656638633438653265 Jan 23 17:29:20.150000 audit: BPF prog-id=252 op=UNLOAD Jan 23 17:29:20.150000 audit[5363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5334 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
23 17:29:20.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537616662633535326636376231626138316532656638633438653265 Jan 23 17:29:20.150000 audit: BPF prog-id=254 op=LOAD Jan 23 17:29:20.150000 audit[5363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400018c648 a2=98 a3=0 items=0 ppid=5334 pid=5363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537616662633535326636376231626138316532656638633438653265 Jan 23 17:29:20.155000 audit: BPF prog-id=255 op=LOAD Jan 23 17:29:20.157000 audit: BPF prog-id=256 op=LOAD Jan 23 17:29:20.157000 audit[5410]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5387 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532643261613631303230666237353463306663383635396135333062 Jan 23 17:29:20.157000 audit: BPF prog-id=256 op=UNLOAD Jan 23 17:29:20.157000 audit[5410]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5387 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532643261613631303230666237353463306663383635396135333062 Jan 23 17:29:20.157000 audit: BPF prog-id=257 op=LOAD Jan 23 17:29:20.157000 audit[5410]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5387 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532643261613631303230666237353463306663383635396135333062 Jan 23 17:29:20.157000 audit: BPF prog-id=258 op=LOAD Jan 23 17:29:20.157000 audit[5410]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5387 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532643261613631303230666237353463306663383635396135333062 Jan 23 17:29:20.158000 audit: BPF prog-id=258 op=UNLOAD Jan 23 17:29:20.158000 audit[5410]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5387 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532643261613631303230666237353463306663383635396135333062 Jan 23 17:29:20.158000 audit: BPF prog-id=257 op=UNLOAD Jan 23 17:29:20.158000 audit[5410]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5387 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532643261613631303230666237353463306663383635396135333062 Jan 23 17:29:20.158000 audit: BPF prog-id=259 op=LOAD Jan 23 17:29:20.158000 audit[5410]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5387 pid=5410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532643261613631303230666237353463306663383635396135333062 Jan 23 17:29:20.169834 systemd[1]: Started cri-containerd-b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408.scope - libcontainer container 
b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408. Jan 23 17:29:20.194000 audit: BPF prog-id=260 op=LOAD Jan 23 17:29:20.195000 audit: BPF prog-id=261 op=LOAD Jan 23 17:29:20.195000 audit[5467]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5413 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236363232646563326266366632653539623630366539386663383931 Jan 23 17:29:20.195000 audit: BPF prog-id=261 op=UNLOAD Jan 23 17:29:20.195000 audit[5467]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5413 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236363232646563326266366632653539623630366539386663383931 Jan 23 17:29:20.196000 audit: BPF prog-id=262 op=LOAD Jan 23 17:29:20.196000 audit[5467]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5413 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.196000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236363232646563326266366632653539623630366539386663383931 Jan 23 17:29:20.196000 audit: BPF prog-id=263 op=LOAD Jan 23 17:29:20.196000 audit[5467]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5413 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236363232646563326266366632653539623630366539386663383931 Jan 23 17:29:20.196000 audit: BPF prog-id=263 op=UNLOAD Jan 23 17:29:20.196000 audit[5467]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5413 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236363232646563326266366632653539623630366539386663383931 Jan 23 17:29:20.196000 audit: BPF prog-id=262 op=UNLOAD Jan 23 17:29:20.196000 audit[5467]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5413 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
23 17:29:20.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236363232646563326266366632653539623630366539386663383931 Jan 23 17:29:20.196000 audit: BPF prog-id=264 op=LOAD Jan 23 17:29:20.196000 audit[5467]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5413 pid=5467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236363232646563326266366632653539623630366539386663383931 Jan 23 17:29:20.211599 containerd[2108]: time="2026-01-23T17:29:20.210517778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcbcd85c4-2prtk,Uid:c60749af-cedd-49c6-899a-24ca91720bf5,Namespace:calico-system,Attempt:0,} returns sandbox id \"2cbca44812552410778ae78b108aee44afeb0648c0c824d94ca77d9528424081\"" Jan 23 17:29:20.212863 containerd[2108]: time="2026-01-23T17:29:20.212577965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cc7gn,Uid:0601b217-86e0-4d5d-8bdc-4a1067d58ca6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2\"" Jan 23 17:29:20.220777 containerd[2108]: time="2026-01-23T17:29:20.220732197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 17:29:20.222693 containerd[2108]: time="2026-01-23T17:29:20.222656265Z" level=info msg="CreateContainer within sandbox 
\"e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 17:29:20.224071 containerd[2108]: time="2026-01-23T17:29:20.224033216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wq8fz,Uid:a12545f0-91c5-4708-a845-2a7a18a8616c,Namespace:calico-system,Attempt:0,} returns sandbox id \"35ace45f3bd907b68a34717491143f08e9dca79fd6a9103de32d28feca5eb2a9\"" Jan 23 17:29:20.228766 containerd[2108]: time="2026-01-23T17:29:20.228739109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2tmvt,Uid:fc7b3f96-a778-4d13-a8d9-a43b196fdac0,Namespace:kube-system,Attempt:0,} returns sandbox id \"57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65\"" Jan 23 17:29:20.231336 containerd[2108]: time="2026-01-23T17:29:20.231307042Z" level=info msg="CreateContainer within sandbox \"57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 17:29:20.234527 containerd[2108]: time="2026-01-23T17:29:20.234500592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtvct,Uid:25c5832d-778b-4f5d-974d-1be8e7376fdb,Namespace:calico-system,Attempt:0,} returns sandbox id \"52d2aa61020fb754c0fc8659a530bb2fd20fca903bd4e3328e4573384400b22d\"" Jan 23 17:29:20.240899 containerd[2108]: time="2026-01-23T17:29:20.240855130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66f6568cfc-nzs8x,Uid:a129fea3-ad15-412b-9854-c14f30f3a9fd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7cb1233868f823b0774b60f91b69d54620072ff53085e40692b14ada650a88de\"" Jan 23 17:29:20.257149 containerd[2108]: time="2026-01-23T17:29:20.257025330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7665dd49cb-d8kt6,Uid:6c109b4b-3504-4b89-94ff-4a8e2ba3506a,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"b6622dec2bf6f2e59b606e98fc8916998b3345e8c8d89008765125cc5b2e0408\"" Jan 23 17:29:20.301057 containerd[2108]: time="2026-01-23T17:29:20.300253944Z" level=info msg="Container 2e824b9dd1041a464267e4f1786fa9a6a81efec50275fc624ea4793907182ff3: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:29:20.310339 containerd[2108]: time="2026-01-23T17:29:20.310296609Z" level=info msg="Container e6380c60183d6febe8e9f6bb069a0c91354bc8e77c49002cb37d387389301cbf: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:29:20.329544 containerd[2108]: time="2026-01-23T17:29:20.329501231Z" level=info msg="CreateContainer within sandbox \"e59a1eee0d0ee6f4ad3dbace9086411e04e88849f9510a95cb719cc7a4260bf2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2e824b9dd1041a464267e4f1786fa9a6a81efec50275fc624ea4793907182ff3\"" Jan 23 17:29:20.330100 containerd[2108]: time="2026-01-23T17:29:20.330061364Z" level=info msg="StartContainer for \"2e824b9dd1041a464267e4f1786fa9a6a81efec50275fc624ea4793907182ff3\"" Jan 23 17:29:20.332108 containerd[2108]: time="2026-01-23T17:29:20.332015946Z" level=info msg="connecting to shim 2e824b9dd1041a464267e4f1786fa9a6a81efec50275fc624ea4793907182ff3" address="unix:///run/containerd/s/00a6004e2f212b3d1d40f77d6769c5f012e56ef2bb2adebad941785a8e1824ad" protocol=ttrpc version=3 Jan 23 17:29:20.348067 containerd[2108]: time="2026-01-23T17:29:20.347973246Z" level=info msg="CreateContainer within sandbox \"57afbc552f67b1ba81e2ef8c48e2eb6f68878cd0c2ca584d3768d81661067d65\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e6380c60183d6febe8e9f6bb069a0c91354bc8e77c49002cb37d387389301cbf\"" Jan 23 17:29:20.348516 systemd[1]: Started cri-containerd-2e824b9dd1041a464267e4f1786fa9a6a81efec50275fc624ea4793907182ff3.scope - libcontainer container 2e824b9dd1041a464267e4f1786fa9a6a81efec50275fc624ea4793907182ff3. 
Jan 23 17:29:20.350143 containerd[2108]: time="2026-01-23T17:29:20.349766564Z" level=info msg="StartContainer for \"e6380c60183d6febe8e9f6bb069a0c91354bc8e77c49002cb37d387389301cbf\"" Jan 23 17:29:20.352839 containerd[2108]: time="2026-01-23T17:29:20.351547576Z" level=info msg="connecting to shim e6380c60183d6febe8e9f6bb069a0c91354bc8e77c49002cb37d387389301cbf" address="unix:///run/containerd/s/677b5da9e00ab4592dd7e36291a1178357b6c1649725ecccd19f2d4445eda973" protocol=ttrpc version=3 Jan 23 17:29:20.364000 audit: BPF prog-id=265 op=LOAD Jan 23 17:29:20.364000 audit: BPF prog-id=266 op=LOAD Jan 23 17:29:20.364000 audit[5516]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=5270 pid=5516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265383234623964643130343161343634323637653466313738366661 Jan 23 17:29:20.364000 audit: BPF prog-id=266 op=UNLOAD Jan 23 17:29:20.364000 audit[5516]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5270 pid=5516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265383234623964643130343161343634323637653466313738366661 Jan 23 17:29:20.364000 audit: BPF prog-id=267 op=LOAD Jan 23 17:29:20.364000 audit[5516]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=5270 pid=5516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265383234623964643130343161343634323637653466313738366661 Jan 23 17:29:20.364000 audit: BPF prog-id=268 op=LOAD Jan 23 17:29:20.364000 audit[5516]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=5270 pid=5516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265383234623964643130343161343634323637653466313738366661 Jan 23 17:29:20.364000 audit: BPF prog-id=268 op=UNLOAD Jan 23 17:29:20.364000 audit[5516]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5270 pid=5516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265383234623964643130343161343634323637653466313738366661 Jan 23 17:29:20.364000 audit: BPF prog-id=267 
op=UNLOAD Jan 23 17:29:20.364000 audit[5516]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5270 pid=5516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265383234623964643130343161343634323637653466313738366661 Jan 23 17:29:20.364000 audit: BPF prog-id=269 op=LOAD Jan 23 17:29:20.364000 audit[5516]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=5270 pid=5516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265383234623964643130343161343634323637653466313738366661 Jan 23 17:29:20.374527 systemd[1]: Started cri-containerd-e6380c60183d6febe8e9f6bb069a0c91354bc8e77c49002cb37d387389301cbf.scope - libcontainer container e6380c60183d6febe8e9f6bb069a0c91354bc8e77c49002cb37d387389301cbf. 
Jan 23 17:29:20.390076 containerd[2108]: time="2026-01-23T17:29:20.390037047Z" level=info msg="StartContainer for \"2e824b9dd1041a464267e4f1786fa9a6a81efec50275fc624ea4793907182ff3\" returns successfully" Jan 23 17:29:20.392000 audit: BPF prog-id=270 op=LOAD Jan 23 17:29:20.392000 audit: BPF prog-id=271 op=LOAD Jan 23 17:29:20.392000 audit[5534]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5334 pid=5534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536333830633630313833643666656265386539663662623036396130 Jan 23 17:29:20.392000 audit: BPF prog-id=271 op=UNLOAD Jan 23 17:29:20.392000 audit[5534]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5334 pid=5534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536333830633630313833643666656265386539663662623036396130 Jan 23 17:29:20.392000 audit: BPF prog-id=272 op=LOAD Jan 23 17:29:20.392000 audit[5534]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5334 pid=5534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
23 17:29:20.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536333830633630313833643666656265386539663662623036396130 Jan 23 17:29:20.392000 audit: BPF prog-id=273 op=LOAD Jan 23 17:29:20.392000 audit[5534]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5334 pid=5534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536333830633630313833643666656265386539663662623036396130 Jan 23 17:29:20.392000 audit: BPF prog-id=273 op=UNLOAD Jan 23 17:29:20.392000 audit[5534]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5334 pid=5534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536333830633630313833643666656265386539663662623036396130 Jan 23 17:29:20.393000 audit: BPF prog-id=272 op=UNLOAD Jan 23 17:29:20.393000 audit[5534]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5334 pid=5534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.393000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536333830633630313833643666656265386539663662623036396130 Jan 23 17:29:20.393000 audit: BPF prog-id=274 op=LOAD Jan 23 17:29:20.393000 audit[5534]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5334 pid=5534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.393000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536333830633630313833643666656265386539663662623036396130 Jan 23 17:29:20.424058 containerd[2108]: time="2026-01-23T17:29:20.424021117Z" level=info msg="StartContainer for \"e6380c60183d6febe8e9f6bb069a0c91354bc8e77c49002cb37d387389301cbf\" returns successfully" Jan 23 17:29:20.541481 containerd[2108]: time="2026-01-23T17:29:20.541428952Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:20.546647 containerd[2108]: time="2026-01-23T17:29:20.546599789Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 17:29:20.546869 containerd[2108]: time="2026-01-23T17:29:20.546694786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:20.546914 
kubelet[3639]: E0123 17:29:20.546879 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:29:20.548022 kubelet[3639]: E0123 17:29:20.546933 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:29:20.548079 containerd[2108]: time="2026-01-23T17:29:20.547531557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:29:20.553747 kubelet[3639]: E0123 17:29:20.553490 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gvjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7fcbcd85c4-2prtk_calico-system(c60749af-cedd-49c6-899a-24ca91720bf5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:20.554878 kubelet[3639]: E0123 17:29:20.554820 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcbcd85c4-2prtk" podUID="c60749af-cedd-49c6-899a-24ca91720bf5" Jan 23 17:29:20.580372 kubelet[3639]: E0123 17:29:20.580150 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcbcd85c4-2prtk" podUID="c60749af-cedd-49c6-899a-24ca91720bf5" Jan 23 17:29:20.612324 kubelet[3639]: I0123 17:29:20.612249 3639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2tmvt" podStartSLOduration=48.612202637 podStartE2EDuration="48.612202637s" podCreationTimestamp="2026-01-23 17:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:29:20.593203458 +0000 UTC m=+54.310083028" watchObservedRunningTime="2026-01-23 17:29:20.612202637 +0000 UTC m=+54.329082183" Jan 23 17:29:20.614000 audit[5580]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=5580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:29:20.614000 audit[5580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffdca3c600 a2=0 a3=1 items=0 ppid=3770 pid=5580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.614000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:29:20.652000 audit[5580]: NETFILTER_CFG table=nat:132 family=2 entries=14 op=nft_register_rule pid=5580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:29:20.652000 audit[5580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffdca3c600 a2=0 a3=1 items=0 ppid=3770 pid=5580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.652000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:29:20.685000 audit[5582]: NETFILTER_CFG table=filter:133 family=2 entries=17 op=nft_register_rule pid=5582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:29:20.685000 audit[5582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc171a020 a2=0 a3=1 items=0 ppid=3770 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.685000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:29:20.689000 audit[5582]: NETFILTER_CFG table=nat:134 family=2 entries=35 op=nft_register_chain pid=5582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:29:20.689000 audit[5582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffc171a020 a2=0 a3=1 items=0 ppid=3770 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:20.689000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:29:20.800004 containerd[2108]: time="2026-01-23T17:29:20.799809015Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:20.835789 containerd[2108]: time="2026-01-23T17:29:20.835637340Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:29:20.836229 containerd[2108]: time="2026-01-23T17:29:20.835918667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:20.836413 kubelet[3639]: E0123 17:29:20.836354 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:29:20.836469 kubelet[3639]: E0123 17:29:20.836421 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:29:20.836956 kubelet[3639]: E0123 17:29:20.836683 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8jrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wq8fz_calico-system(a12545f0-91c5-4708-a845-2a7a18a8616c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:20.837296 containerd[2108]: time="2026-01-23T17:29:20.837202693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:29:20.837839 kubelet[3639]: E0123 17:29:20.837788 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wq8fz" podUID="a12545f0-91c5-4708-a845-2a7a18a8616c" Jan 23 17:29:21.117373 containerd[2108]: time="2026-01-23T17:29:21.117225040Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:21.123084 containerd[2108]: time="2026-01-23T17:29:21.122904543Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 17:29:21.123084 containerd[2108]: time="2026-01-23T17:29:21.122957706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:21.123579 kubelet[3639]: E0123 17:29:21.123531 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:29:21.123654 kubelet[3639]: E0123 17:29:21.123593 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:29:21.124284 containerd[2108]: time="2026-01-23T17:29:21.124211755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:29:21.124614 kubelet[3639]: E0123 17:29:21.124338 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9pgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dtvct_calico-system(25c5832d-778b-4f5d-974d-1be8e7376fdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 23 17:29:21.390245 containerd[2108]: time="2026-01-23T17:29:21.389970649Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:21.395585 containerd[2108]: time="2026-01-23T17:29:21.395467550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:21.395585 containerd[2108]: time="2026-01-23T17:29:21.395498800Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:29:21.396339 kubelet[3639]: E0123 17:29:21.396297 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:29:21.396596 kubelet[3639]: E0123 17:29:21.396457 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:29:21.396933 containerd[2108]: time="2026-01-23T17:29:21.396910433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:29:21.397439 kubelet[3639]: E0123 17:29:21.397052 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8t5xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66f6568cfc-nzs8x_calico-apiserver(a129fea3-ad15-412b-9854-c14f30f3a9fd): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:21.398951 kubelet[3639]: E0123 17:29:21.398921 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-nzs8x" podUID="a129fea3-ad15-412b-9854-c14f30f3a9fd" Jan 23 17:29:21.506460 systemd-networkd[1692]: cali54544286f16: Gained IPv6LL Jan 23 17:29:21.590114 kubelet[3639]: E0123 17:29:21.589956 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wq8fz" podUID="a12545f0-91c5-4708-a845-2a7a18a8616c" Jan 23 17:29:21.591291 kubelet[3639]: E0123 17:29:21.590708 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcbcd85c4-2prtk" podUID="c60749af-cedd-49c6-899a-24ca91720bf5" Jan 23 17:29:21.592968 kubelet[3639]: E0123 17:29:21.592893 3639 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-nzs8x" podUID="a129fea3-ad15-412b-9854-c14f30f3a9fd" Jan 23 17:29:21.614459 kubelet[3639]: I0123 17:29:21.614398 3639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cc7gn" podStartSLOduration=49.614378514 podStartE2EDuration="49.614378514s" podCreationTimestamp="2026-01-23 17:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:29:20.640921713 +0000 UTC m=+54.357801291" watchObservedRunningTime="2026-01-23 17:29:21.614378514 +0000 UTC m=+55.331258052" Jan 23 17:29:21.634401 systemd-networkd[1692]: cali357e20852e0: Gained IPv6LL Jan 23 17:29:21.642365 containerd[2108]: time="2026-01-23T17:29:21.642011094Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:21.644000 audit[5590]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=5590 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:29:21.649319 kernel: kauditd_printk_skb: 221 callbacks suppressed Jan 23 17:29:21.650362 kernel: audit: type=1325 audit(1769189361.644:751): table=filter:135 family=2 entries=14 op=nft_register_rule pid=5590 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:29:21.661765 containerd[2108]: time="2026-01-23T17:29:21.661687716Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:29:21.663021 kubelet[3639]: E0123 17:29:21.662415 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:29:21.663021 kubelet[3639]: E0123 17:29:21.662464 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:29:21.663304 containerd[2108]: time="2026-01-23T17:29:21.661725254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:21.644000 audit[5590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff2f92c90 a2=0 a3=1 items=0 ppid=3770 pid=5590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:21.665214 kubelet[3639]: E0123 17:29:21.663260 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:247d4a4cafdd47dcae21c5927f6ce13b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vxcwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7665dd49cb-d8kt6_calico-system(6c109b4b-3504-4b89-94ff-4a8e2ba3506a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:21.665329 containerd[2108]: time="2026-01-23T17:29:21.664108961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 17:29:21.684592 kernel: audit: type=1300 
audit(1769189361.644:751): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff2f92c90 a2=0 a3=1 items=0 ppid=3770 pid=5590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:21.644000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:29:21.696212 kernel: audit: type=1327 audit(1769189361.644:751): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:29:21.685000 audit[5590]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=5590 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:29:21.710806 kernel: audit: type=1325 audit(1769189361.685:752): table=nat:136 family=2 entries=20 op=nft_register_rule pid=5590 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:29:21.685000 audit[5590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff2f92c90 a2=0 a3=1 items=0 ppid=3770 pid=5590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:21.732900 kernel: audit: type=1300 audit(1769189361.685:752): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff2f92c90 a2=0 a3=1 items=0 ppid=3770 pid=5590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:21.685000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:29:21.744212 kernel: audit: type=1327 
audit(1769189361.685:752): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:29:21.979778 containerd[2108]: time="2026-01-23T17:29:21.979722093Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:21.983517 containerd[2108]: time="2026-01-23T17:29:21.983450727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:21.983840 containerd[2108]: time="2026-01-23T17:29:21.983454111Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 17:29:21.984088 kubelet[3639]: E0123 17:29:21.984037 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:29:21.984289 kubelet[3639]: E0123 17:29:21.984164 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:29:21.984503 kubelet[3639]: E0123 17:29:21.984444 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9pgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dtvct_calico-system(25c5832d-778b-4f5d-974d-1be8e7376fdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:21.985555 containerd[2108]: time="2026-01-23T17:29:21.985454871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:29:21.986773 kubelet[3639]: E0123 17:29:21.986669 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtvct" podUID="25c5832d-778b-4f5d-974d-1be8e7376fdb" Jan 23 17:29:22.229760 containerd[2108]: time="2026-01-23T17:29:22.229629067Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:22.234565 containerd[2108]: time="2026-01-23T17:29:22.234387219Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:29:22.234565 containerd[2108]: time="2026-01-23T17:29:22.234429661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:22.235134 kubelet[3639]: E0123 17:29:22.235032 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:29:22.235134 kubelet[3639]: E0123 17:29:22.235090 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:29:22.235350 kubelet[3639]: E0123 17:29:22.235179 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxcwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNon
Root:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7665dd49cb-d8kt6_calico-system(6c109b4b-3504-4b89-94ff-4a8e2ba3506a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:22.236384 kubelet[3639]: E0123 17:29:22.236346 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7665dd49cb-d8kt6" podUID="6c109b4b-3504-4b89-94ff-4a8e2ba3506a" Jan 23 17:29:22.588508 kubelet[3639]: E0123 17:29:22.588329 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to 
\"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7665dd49cb-d8kt6" podUID="6c109b4b-3504-4b89-94ff-4a8e2ba3506a" Jan 23 17:29:22.588745 kubelet[3639]: E0123 17:29:22.588675 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtvct" podUID="25c5832d-778b-4f5d-974d-1be8e7376fdb" Jan 23 17:29:22.621000 audit[5592]: NETFILTER_CFG table=filter:137 family=2 entries=14 op=nft_register_rule pid=5592 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:29:22.621000 audit[5592]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffa245990 a2=0 a3=1 items=0 ppid=3770 pid=5592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:22.652186 kernel: audit: type=1325 
audit(1769189362.621:753): table=filter:137 family=2 entries=14 op=nft_register_rule pid=5592 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:29:22.652327 kernel: audit: type=1300 audit(1769189362.621:753): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffa245990 a2=0 a3=1 items=0 ppid=3770 pid=5592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:22.621000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:29:22.661822 kernel: audit: type=1327 audit(1769189362.621:753): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:29:22.679000 audit[5592]: NETFILTER_CFG table=nat:138 family=2 entries=56 op=nft_register_chain pid=5592 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:29:22.679000 audit[5592]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=fffffa245990 a2=0 a3=1 items=0 ppid=3770 pid=5592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:22.679000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:29:22.692299 kernel: audit: type=1325 audit(1769189362.679:754): table=nat:138 family=2 entries=56 op=nft_register_chain pid=5592 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:29:24.358818 containerd[2108]: time="2026-01-23T17:29:24.358764584Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-75d7f978dc-h5tcw,Uid:9607b09f-7ec4-4ed2-9e57-38044aa1d0d6,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:29:24.476749 systemd-networkd[1692]: cali4dcf558076c: Link UP Jan 23 17:29:24.478794 systemd-networkd[1692]: cali4dcf558076c: Gained carrier Jan 23 17:29:24.494942 containerd[2108]: 2026-01-23 17:29:24.414 [INFO][5594] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--75d7f978dc--h5tcw-eth0 calico-apiserver-75d7f978dc- calico-apiserver 9607b09f-7ec4-4ed2-9e57-38044aa1d0d6 817 0 2026-01-23 17:28:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75d7f978dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.1.0-a-f00ee6181d calico-apiserver-75d7f978dc-h5tcw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4dcf558076c [] [] }} ContainerID="0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0" Namespace="calico-apiserver" Pod="calico-apiserver-75d7f978dc-h5tcw" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--75d7f978dc--h5tcw-" Jan 23 17:29:24.494942 containerd[2108]: 2026-01-23 17:29:24.414 [INFO][5594] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0" Namespace="calico-apiserver" Pod="calico-apiserver-75d7f978dc-h5tcw" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--75d7f978dc--h5tcw-eth0" Jan 23 17:29:24.494942 containerd[2108]: 2026-01-23 17:29:24.435 [INFO][5607] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0" 
HandleID="k8s-pod-network.0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0" Workload="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--75d7f978dc--h5tcw-eth0" Jan 23 17:29:24.494942 containerd[2108]: 2026-01-23 17:29:24.435 [INFO][5607] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0" HandleID="k8s-pod-network.0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0" Workload="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--75d7f978dc--h5tcw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afa0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.1.0-a-f00ee6181d", "pod":"calico-apiserver-75d7f978dc-h5tcw", "timestamp":"2026-01-23 17:29:24.435152064 +0000 UTC"}, Hostname:"ci-4547.1.0-a-f00ee6181d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:29:24.494942 containerd[2108]: 2026-01-23 17:29:24.435 [INFO][5607] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:29:24.494942 containerd[2108]: 2026-01-23 17:29:24.435 [INFO][5607] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:29:24.494942 containerd[2108]: 2026-01-23 17:29:24.435 [INFO][5607] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-f00ee6181d' Jan 23 17:29:24.494942 containerd[2108]: 2026-01-23 17:29:24.441 [INFO][5607] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:24.494942 containerd[2108]: 2026-01-23 17:29:24.447 [INFO][5607] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:24.494942 containerd[2108]: 2026-01-23 17:29:24.452 [INFO][5607] ipam/ipam.go 511: Trying affinity for 192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:24.494942 containerd[2108]: 2026-01-23 17:29:24.453 [INFO][5607] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:24.494942 containerd[2108]: 2026-01-23 17:29:24.455 [INFO][5607] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:24.494942 containerd[2108]: 2026-01-23 17:29:24.455 [INFO][5607] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.192/26 handle="k8s-pod-network.0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:24.494942 containerd[2108]: 2026-01-23 17:29:24.456 [INFO][5607] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0 Jan 23 17:29:24.494942 containerd[2108]: 2026-01-23 17:29:24.461 [INFO][5607] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.192/26 handle="k8s-pod-network.0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:24.494942 containerd[2108]: 2026-01-23 17:29:24.470 [INFO][5607] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.105.200/26] block=192.168.105.192/26 handle="k8s-pod-network.0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:24.494942 containerd[2108]: 2026-01-23 17:29:24.470 [INFO][5607] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.200/26] handle="k8s-pod-network.0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:24.494942 containerd[2108]: 2026-01-23 17:29:24.471 [INFO][5607] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:29:24.494942 containerd[2108]: 2026-01-23 17:29:24.471 [INFO][5607] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.200/26] IPv6=[] ContainerID="0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0" HandleID="k8s-pod-network.0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0" Workload="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--75d7f978dc--h5tcw-eth0" Jan 23 17:29:24.496436 containerd[2108]: 2026-01-23 17:29:24.472 [INFO][5594] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0" Namespace="calico-apiserver" Pod="calico-apiserver-75d7f978dc-h5tcw" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--75d7f978dc--h5tcw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--75d7f978dc--h5tcw-eth0", GenerateName:"calico-apiserver-75d7f978dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"9607b09f-7ec4-4ed2-9e57-38044aa1d0d6", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 28, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75d7f978dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-f00ee6181d", ContainerID:"", Pod:"calico-apiserver-75d7f978dc-h5tcw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4dcf558076c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:29:24.496436 containerd[2108]: 2026-01-23 17:29:24.473 [INFO][5594] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.200/32] ContainerID="0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0" Namespace="calico-apiserver" Pod="calico-apiserver-75d7f978dc-h5tcw" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--75d7f978dc--h5tcw-eth0" Jan 23 17:29:24.496436 containerd[2108]: 2026-01-23 17:29:24.473 [INFO][5594] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4dcf558076c ContainerID="0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0" Namespace="calico-apiserver" Pod="calico-apiserver-75d7f978dc-h5tcw" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--75d7f978dc--h5tcw-eth0" Jan 23 17:29:24.496436 containerd[2108]: 2026-01-23 17:29:24.479 [INFO][5594] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0" Namespace="calico-apiserver" 
Pod="calico-apiserver-75d7f978dc-h5tcw" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--75d7f978dc--h5tcw-eth0" Jan 23 17:29:24.496436 containerd[2108]: 2026-01-23 17:29:24.479 [INFO][5594] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0" Namespace="calico-apiserver" Pod="calico-apiserver-75d7f978dc-h5tcw" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--75d7f978dc--h5tcw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--75d7f978dc--h5tcw-eth0", GenerateName:"calico-apiserver-75d7f978dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"9607b09f-7ec4-4ed2-9e57-38044aa1d0d6", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 28, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75d7f978dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-f00ee6181d", ContainerID:"0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0", Pod:"calico-apiserver-75d7f978dc-h5tcw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali4dcf558076c", MAC:"72:ac:ab:0c:02:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:29:24.496436 containerd[2108]: 2026-01-23 17:29:24.491 [INFO][5594] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0" Namespace="calico-apiserver" Pod="calico-apiserver-75d7f978dc-h5tcw" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--75d7f978dc--h5tcw-eth0" Jan 23 17:29:24.508000 audit[5621]: NETFILTER_CFG table=filter:139 family=2 entries=57 op=nft_register_chain pid=5621 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:29:24.508000 audit[5621]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27812 a0=3 a1=ffffc3266290 a2=0 a3=ffff8d4aafa8 items=0 ppid=4901 pid=5621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:24.508000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:29:24.590847 containerd[2108]: time="2026-01-23T17:29:24.590754515Z" level=info msg="connecting to shim 0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0" address="unix:///run/containerd/s/2f4d1deda372b0e23f5c0c88d14d5de344e1a67f767b275e7211b9eefa51f39b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:29:24.617478 systemd[1]: Started cri-containerd-0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0.scope - libcontainer container 0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0. 
Jan 23 17:29:24.627000 audit: BPF prog-id=275 op=LOAD Jan 23 17:29:24.627000 audit: BPF prog-id=276 op=LOAD Jan 23 17:29:24.627000 audit[5640]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5629 pid=5640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:24.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063303238343738653237303437326336623032323038656132303664 Jan 23 17:29:24.627000 audit: BPF prog-id=276 op=UNLOAD Jan 23 17:29:24.627000 audit[5640]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5629 pid=5640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:24.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063303238343738653237303437326336623032323038656132303664 Jan 23 17:29:24.627000 audit: BPF prog-id=277 op=LOAD Jan 23 17:29:24.627000 audit[5640]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5629 pid=5640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:24.627000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063303238343738653237303437326336623032323038656132303664 Jan 23 17:29:24.628000 audit: BPF prog-id=278 op=LOAD Jan 23 17:29:24.628000 audit[5640]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5629 pid=5640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:24.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063303238343738653237303437326336623032323038656132303664 Jan 23 17:29:24.628000 audit: BPF prog-id=278 op=UNLOAD Jan 23 17:29:24.628000 audit[5640]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5629 pid=5640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:24.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063303238343738653237303437326336623032323038656132303664 Jan 23 17:29:24.628000 audit: BPF prog-id=277 op=UNLOAD Jan 23 17:29:24.628000 audit[5640]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5629 pid=5640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
23 17:29:24.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063303238343738653237303437326336623032323038656132303664 Jan 23 17:29:24.628000 audit: BPF prog-id=279 op=LOAD Jan 23 17:29:24.628000 audit[5640]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5629 pid=5640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:24.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063303238343738653237303437326336623032323038656132303664 Jan 23 17:29:24.662868 containerd[2108]: time="2026-01-23T17:29:24.662825675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75d7f978dc-h5tcw,Uid:9607b09f-7ec4-4ed2-9e57-38044aa1d0d6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0c028478e270472c6b02208ea206de92b5325c80f9e8a9b50663a120b81323e0\"" Jan 23 17:29:24.664979 containerd[2108]: time="2026-01-23T17:29:24.664940137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:29:24.968231 containerd[2108]: time="2026-01-23T17:29:24.968027409Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:24.975931 containerd[2108]: time="2026-01-23T17:29:24.975819842Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 
17:29:24.975931 containerd[2108]: time="2026-01-23T17:29:24.975864413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:24.976084 kubelet[3639]: E0123 17:29:24.976040 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:29:24.976398 kubelet[3639]: E0123 17:29:24.976086 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:29:24.976398 kubelet[3639]: E0123 17:29:24.976194 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kd6w2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-75d7f978dc-h5tcw_calico-apiserver(9607b09f-7ec4-4ed2-9e57-38044aa1d0d6): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:24.977729 kubelet[3639]: E0123 17:29:24.977665 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75d7f978dc-h5tcw" podUID="9607b09f-7ec4-4ed2-9e57-38044aa1d0d6" Jan 23 17:29:25.358132 containerd[2108]: time="2026-01-23T17:29:25.357798567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66f6568cfc-b7js4,Uid:d6234171-70b3-48b5-98d5-2c3cd8e41f24,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:29:25.477250 systemd-networkd[1692]: cali99d7ce74e24: Link UP Jan 23 17:29:25.477811 systemd-networkd[1692]: cali99d7ce74e24: Gained carrier Jan 23 17:29:25.496835 containerd[2108]: 2026-01-23 17:29:25.409 [INFO][5666] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--b7js4-eth0 calico-apiserver-66f6568cfc- calico-apiserver d6234171-70b3-48b5-98d5-2c3cd8e41f24 808 0 2026-01-23 17:28:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66f6568cfc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.1.0-a-f00ee6181d calico-apiserver-66f6568cfc-b7js4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali99d7ce74e24 [] [] }} ContainerID="217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e" 
Namespace="calico-apiserver" Pod="calico-apiserver-66f6568cfc-b7js4" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--b7js4-" Jan 23 17:29:25.496835 containerd[2108]: 2026-01-23 17:29:25.410 [INFO][5666] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e" Namespace="calico-apiserver" Pod="calico-apiserver-66f6568cfc-b7js4" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--b7js4-eth0" Jan 23 17:29:25.496835 containerd[2108]: 2026-01-23 17:29:25.431 [INFO][5678] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e" HandleID="k8s-pod-network.217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e" Workload="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--b7js4-eth0" Jan 23 17:29:25.496835 containerd[2108]: 2026-01-23 17:29:25.431 [INFO][5678] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e" HandleID="k8s-pod-network.217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e" Workload="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--b7js4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000254fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.1.0-a-f00ee6181d", "pod":"calico-apiserver-66f6568cfc-b7js4", "timestamp":"2026-01-23 17:29:25.431065581 +0000 UTC"}, Hostname:"ci-4547.1.0-a-f00ee6181d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:29:25.496835 containerd[2108]: 2026-01-23 17:29:25.431 [INFO][5678] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 17:29:25.496835 containerd[2108]: 2026-01-23 17:29:25.431 [INFO][5678] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 17:29:25.496835 containerd[2108]: 2026-01-23 17:29:25.431 [INFO][5678] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-f00ee6181d' Jan 23 17:29:25.496835 containerd[2108]: 2026-01-23 17:29:25.436 [INFO][5678] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:25.496835 containerd[2108]: 2026-01-23 17:29:25.440 [INFO][5678] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:25.496835 containerd[2108]: 2026-01-23 17:29:25.446 [INFO][5678] ipam/ipam.go 511: Trying affinity for 192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:25.496835 containerd[2108]: 2026-01-23 17:29:25.448 [INFO][5678] ipam/ipam.go 158: Attempting to load block cidr=192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:25.496835 containerd[2108]: 2026-01-23 17:29:25.451 [INFO][5678] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.105.192/26 host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:25.496835 containerd[2108]: 2026-01-23 17:29:25.451 [INFO][5678] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.105.192/26 handle="k8s-pod-network.217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:25.496835 containerd[2108]: 2026-01-23 17:29:25.453 [INFO][5678] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e Jan 23 17:29:25.496835 containerd[2108]: 2026-01-23 17:29:25.458 [INFO][5678] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.105.192/26 
handle="k8s-pod-network.217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:25.496835 containerd[2108]: 2026-01-23 17:29:25.471 [INFO][5678] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.105.201/26] block=192.168.105.192/26 handle="k8s-pod-network.217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:25.496835 containerd[2108]: 2026-01-23 17:29:25.471 [INFO][5678] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.105.201/26] handle="k8s-pod-network.217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e" host="ci-4547.1.0-a-f00ee6181d" Jan 23 17:29:25.496835 containerd[2108]: 2026-01-23 17:29:25.471 [INFO][5678] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:29:25.496835 containerd[2108]: 2026-01-23 17:29:25.471 [INFO][5678] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.105.201/26] IPv6=[] ContainerID="217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e" HandleID="k8s-pod-network.217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e" Workload="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--b7js4-eth0" Jan 23 17:29:25.498790 containerd[2108]: 2026-01-23 17:29:25.473 [INFO][5666] cni-plugin/k8s.go 418: Populated endpoint ContainerID="217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e" Namespace="calico-apiserver" Pod="calico-apiserver-66f6568cfc-b7js4" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--b7js4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--b7js4-eth0", GenerateName:"calico-apiserver-66f6568cfc-", Namespace:"calico-apiserver", SelfLink:"", UID:"d6234171-70b3-48b5-98d5-2c3cd8e41f24", ResourceVersion:"808", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 23, 17, 28, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66f6568cfc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-f00ee6181d", ContainerID:"", Pod:"calico-apiserver-66f6568cfc-b7js4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali99d7ce74e24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:29:25.498790 containerd[2108]: 2026-01-23 17:29:25.473 [INFO][5666] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.105.201/32] ContainerID="217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e" Namespace="calico-apiserver" Pod="calico-apiserver-66f6568cfc-b7js4" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--b7js4-eth0" Jan 23 17:29:25.498790 containerd[2108]: 2026-01-23 17:29:25.473 [INFO][5666] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99d7ce74e24 ContainerID="217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e" Namespace="calico-apiserver" Pod="calico-apiserver-66f6568cfc-b7js4" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--b7js4-eth0" Jan 23 17:29:25.498790 containerd[2108]: 2026-01-23 17:29:25.478 
[INFO][5666] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e" Namespace="calico-apiserver" Pod="calico-apiserver-66f6568cfc-b7js4" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--b7js4-eth0" Jan 23 17:29:25.498790 containerd[2108]: 2026-01-23 17:29:25.479 [INFO][5666] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e" Namespace="calico-apiserver" Pod="calico-apiserver-66f6568cfc-b7js4" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--b7js4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--b7js4-eth0", GenerateName:"calico-apiserver-66f6568cfc-", Namespace:"calico-apiserver", SelfLink:"", UID:"d6234171-70b3-48b5-98d5-2c3cd8e41f24", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 28, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66f6568cfc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-f00ee6181d", ContainerID:"217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e", Pod:"calico-apiserver-66f6568cfc-b7js4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.105.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali99d7ce74e24", MAC:"ca:93:44:9e:87:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:29:25.498790 containerd[2108]: 2026-01-23 17:29:25.493 [INFO][5666] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e" Namespace="calico-apiserver" Pod="calico-apiserver-66f6568cfc-b7js4" WorkloadEndpoint="ci--4547.1.0--a--f00ee6181d-k8s-calico--apiserver--66f6568cfc--b7js4-eth0" Jan 23 17:29:25.510000 audit[5693]: NETFILTER_CFG table=filter:140 family=2 entries=67 op=nft_register_chain pid=5693 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:29:25.510000 audit[5693]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=31852 a0=3 a1=ffffcb8a3a60 a2=0 a3=ffff9818bfa8 items=0 ppid=4901 pid=5693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:25.510000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:29:25.559893 containerd[2108]: time="2026-01-23T17:29:25.559807512Z" level=info msg="connecting to shim 217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e" address="unix:///run/containerd/s/1597540daf2b011962160db6e458f84d0a55cf02e6ac58a45f98e6fda3534357" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:29:25.585480 systemd[1]: Started cri-containerd-217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e.scope - libcontainer container 
217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e. Jan 23 17:29:25.594951 kubelet[3639]: E0123 17:29:25.594862 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75d7f978dc-h5tcw" podUID="9607b09f-7ec4-4ed2-9e57-38044aa1d0d6" Jan 23 17:29:25.601000 audit: BPF prog-id=280 op=LOAD Jan 23 17:29:25.602000 audit: BPF prog-id=281 op=LOAD Jan 23 17:29:25.602000 audit[5713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5702 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:25.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231376339663732643934363265313330663166353162343336323336 Jan 23 17:29:25.602000 audit: BPF prog-id=281 op=UNLOAD Jan 23 17:29:25.602000 audit[5713]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5702 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:25.602000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231376339663732643934363265313330663166353162343336323336 Jan 23 17:29:25.602000 audit: BPF prog-id=282 op=LOAD Jan 23 17:29:25.602000 audit[5713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5702 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:25.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231376339663732643934363265313330663166353162343336323336 Jan 23 17:29:25.602000 audit: BPF prog-id=283 op=LOAD Jan 23 17:29:25.602000 audit[5713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5702 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:25.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231376339663732643934363265313330663166353162343336323336 Jan 23 17:29:25.602000 audit: BPF prog-id=283 op=UNLOAD Jan 23 17:29:25.602000 audit[5713]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5702 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 23 17:29:25.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231376339663732643934363265313330663166353162343336323336 Jan 23 17:29:25.602000 audit: BPF prog-id=282 op=UNLOAD Jan 23 17:29:25.602000 audit[5713]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5702 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:25.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231376339663732643934363265313330663166353162343336323336 Jan 23 17:29:25.603000 audit: BPF prog-id=284 op=LOAD Jan 23 17:29:25.603000 audit[5713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5702 pid=5713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:25.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231376339663732643934363265313330663166353162343336323336 Jan 23 17:29:25.634000 audit[5738]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=5738 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:29:25.634000 audit[5738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff88b8d50 a2=0 a3=1 items=0 ppid=3770 
pid=5738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:25.634000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:29:25.639000 audit[5738]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=5738 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:29:25.639000 audit[5738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff88b8d50 a2=0 a3=1 items=0 ppid=3770 pid=5738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:25.639000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:29:25.697089 containerd[2108]: time="2026-01-23T17:29:25.696978131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66f6568cfc-b7js4,Uid:d6234171-70b3-48b5-98d5-2c3cd8e41f24,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"217c9f72d9462e130f1f51b436236ca7bf2227dbb446bb4c16dc6e507340c91e\"" Jan 23 17:29:25.699198 containerd[2108]: time="2026-01-23T17:29:25.699069972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:29:25.997041 containerd[2108]: time="2026-01-23T17:29:25.996818438Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:26.002861 containerd[2108]: time="2026-01-23T17:29:26.002751797Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:29:26.002861 containerd[2108]: time="2026-01-23T17:29:26.002799216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:26.003024 kubelet[3639]: E0123 17:29:26.002972 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:29:26.003360 kubelet[3639]: E0123 17:29:26.003020 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:29:26.003360 kubelet[3639]: E0123 17:29:26.003133 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nhrx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66f6568cfc-b7js4_calico-apiserver(d6234171-70b3-48b5-98d5-2c3cd8e41f24): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:26.004554 kubelet[3639]: E0123 17:29:26.004518 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-b7js4" podUID="d6234171-70b3-48b5-98d5-2c3cd8e41f24" Jan 23 17:29:26.306435 systemd-networkd[1692]: cali4dcf558076c: Gained IPv6LL Jan 23 17:29:26.597554 kubelet[3639]: E0123 17:29:26.597434 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-b7js4" podUID="d6234171-70b3-48b5-98d5-2c3cd8e41f24" Jan 23 17:29:26.597554 kubelet[3639]: E0123 17:29:26.597255 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75d7f978dc-h5tcw" podUID="9607b09f-7ec4-4ed2-9e57-38044aa1d0d6" Jan 23 17:29:26.625000 audit[5748]: NETFILTER_CFG table=filter:143 
family=2 entries=14 op=nft_register_rule pid=5748 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:29:26.625000 audit[5748]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd1717410 a2=0 a3=1 items=0 ppid=3770 pid=5748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:26.625000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:29:26.631000 audit[5748]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5748 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:29:26.631000 audit[5748]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd1717410 a2=0 a3=1 items=0 ppid=3770 pid=5748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:29:26.631000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:29:26.882576 systemd-networkd[1692]: cali99d7ce74e24: Gained IPv6LL Jan 23 17:29:27.599740 kubelet[3639]: E0123 17:29:27.598764 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-b7js4" podUID="d6234171-70b3-48b5-98d5-2c3cd8e41f24" Jan 23 17:29:32.360119 containerd[2108]: 
time="2026-01-23T17:29:32.359754798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:29:32.623834 containerd[2108]: time="2026-01-23T17:29:32.623703732Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:32.628924 containerd[2108]: time="2026-01-23T17:29:32.628819663Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:29:32.628924 containerd[2108]: time="2026-01-23T17:29:32.628872730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:32.629171 kubelet[3639]: E0123 17:29:32.629069 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:29:32.629171 kubelet[3639]: E0123 17:29:32.629115 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:29:32.629903 kubelet[3639]: E0123 17:29:32.629235 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8jrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wq8fz_calico-system(a12545f0-91c5-4708-a845-2a7a18a8616c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:32.631153 kubelet[3639]: E0123 17:29:32.631109 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wq8fz" podUID="a12545f0-91c5-4708-a845-2a7a18a8616c" Jan 23 17:29:33.359433 containerd[2108]: time="2026-01-23T17:29:33.359047490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:29:33.611720 containerd[2108]: time="2026-01-23T17:29:33.611586916Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:33.616391 containerd[2108]: 
time="2026-01-23T17:29:33.616344865Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:29:33.616505 containerd[2108]: time="2026-01-23T17:29:33.616428334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:33.616685 kubelet[3639]: E0123 17:29:33.616650 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:29:33.616781 kubelet[3639]: E0123 17:29:33.616769 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:29:33.617047 kubelet[3639]: E0123 17:29:33.617012 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:247d4a4cafdd47dcae21c5927f6ce13b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vxcwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7665dd49cb-d8kt6_calico-system(6c109b4b-3504-4b89-94ff-4a8e2ba3506a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:33.619645 containerd[2108]: time="2026-01-23T17:29:33.619309627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:29:33.987494 containerd[2108]: 
time="2026-01-23T17:29:33.987439401Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:33.993158 containerd[2108]: time="2026-01-23T17:29:33.993111141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:29:33.993255 containerd[2108]: time="2026-01-23T17:29:33.993203483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:33.993628 kubelet[3639]: E0123 17:29:33.993416 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:29:33.993628 kubelet[3639]: E0123 17:29:33.993478 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:29:33.993628 kubelet[3639]: E0123 17:29:33.993584 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxcwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7665dd49cb-d8kt6_calico-system(6c109b4b-3504-4b89-94ff-4a8e2ba3506a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:33.995062 kubelet[3639]: E0123 17:29:33.995012 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7665dd49cb-d8kt6" podUID="6c109b4b-3504-4b89-94ff-4a8e2ba3506a" Jan 23 17:29:35.359297 containerd[2108]: time="2026-01-23T17:29:35.359240306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:29:35.636301 containerd[2108]: time="2026-01-23T17:29:35.635998115Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:35.647130 containerd[2108]: time="2026-01-23T17:29:35.646969016Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 17:29:35.647130 containerd[2108]: time="2026-01-23T17:29:35.647073667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:35.647323 kubelet[3639]: E0123 17:29:35.647288 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:29:35.648086 kubelet[3639]: E0123 17:29:35.647340 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:29:35.648086 kubelet[3639]: E0123 17:29:35.647461 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9pgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPriv
ilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dtvct_calico-system(25c5832d-778b-4f5d-974d-1be8e7376fdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:35.649635 containerd[2108]: time="2026-01-23T17:29:35.649349289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 17:29:35.913975 containerd[2108]: time="2026-01-23T17:29:35.913841029Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:35.921411 containerd[2108]: time="2026-01-23T17:29:35.921352608Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 17:29:35.921564 containerd[2108]: time="2026-01-23T17:29:35.921392073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:35.921771 kubelet[3639]: E0123 17:29:35.921716 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:29:35.921829 
kubelet[3639]: E0123 17:29:35.921777 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:29:35.921938 kubelet[3639]: E0123 17:29:35.921881 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9pgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*t
rue,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dtvct_calico-system(25c5832d-778b-4f5d-974d-1be8e7376fdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:35.923403 kubelet[3639]: E0123 17:29:35.923354 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtvct" podUID="25c5832d-778b-4f5d-974d-1be8e7376fdb" Jan 23 17:29:36.361300 containerd[2108]: time="2026-01-23T17:29:36.361190520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:29:36.601513 containerd[2108]: time="2026-01-23T17:29:36.601441160Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:36.608553 containerd[2108]: time="2026-01-23T17:29:36.608399455Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:29:36.608553 containerd[2108]: time="2026-01-23T17:29:36.608501145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:36.608721 kubelet[3639]: E0123 17:29:36.608648 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:29:36.608721 kubelet[3639]: E0123 17:29:36.608700 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:29:36.609051 kubelet[3639]: E0123 17:29:36.608891 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8t5xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66f6568cfc-nzs8x_calico-apiserver(a129fea3-ad15-412b-9854-c14f30f3a9fd): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:36.610515 containerd[2108]: time="2026-01-23T17:29:36.609450320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 17:29:36.610597 kubelet[3639]: E0123 17:29:36.610331 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-nzs8x" podUID="a129fea3-ad15-412b-9854-c14f30f3a9fd" Jan 23 17:29:36.880720 containerd[2108]: time="2026-01-23T17:29:36.880507985Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:36.885849 containerd[2108]: time="2026-01-23T17:29:36.885742054Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 17:29:36.885849 containerd[2108]: time="2026-01-23T17:29:36.885789679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:36.885998 kubelet[3639]: E0123 17:29:36.885954 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:29:36.886333 kubelet[3639]: E0123 17:29:36.885999 3639 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:29:36.886333 kubelet[3639]: E0123 17:29:36.886133 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gvjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7fcbcd85c4-2prtk_calico-system(c60749af-cedd-49c6-899a-24ca91720bf5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:36.887595 kubelet[3639]: E0123 17:29:36.887550 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcbcd85c4-2prtk" podUID="c60749af-cedd-49c6-899a-24ca91720bf5" Jan 23 17:29:40.359743 
containerd[2108]: time="2026-01-23T17:29:40.359697028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:29:40.637776 containerd[2108]: time="2026-01-23T17:29:40.637531410Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:40.643291 containerd[2108]: time="2026-01-23T17:29:40.643173575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:29:40.643518 containerd[2108]: time="2026-01-23T17:29:40.643484841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:40.643602 kubelet[3639]: E0123 17:29:40.643559 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:29:40.643875 kubelet[3639]: E0123 17:29:40.643616 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:29:40.643930 kubelet[3639]: E0123 17:29:40.643874 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kd6w2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-75d7f978dc-h5tcw_calico-apiserver(9607b09f-7ec4-4ed2-9e57-38044aa1d0d6): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:40.644620 containerd[2108]: time="2026-01-23T17:29:40.644587864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:29:40.645001 kubelet[3639]: E0123 17:29:40.644973 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75d7f978dc-h5tcw" podUID="9607b09f-7ec4-4ed2-9e57-38044aa1d0d6" Jan 23 17:29:41.037382 containerd[2108]: time="2026-01-23T17:29:41.037330100Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:41.041684 containerd[2108]: time="2026-01-23T17:29:41.041637963Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:29:41.041754 containerd[2108]: time="2026-01-23T17:29:41.041729553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:41.041922 kubelet[3639]: E0123 17:29:41.041884 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:29:41.041991 kubelet[3639]: E0123 17:29:41.041939 3639 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:29:41.042093 kubelet[3639]: E0123 17:29:41.042057 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nhrx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66f6568cfc-b7js4_calico-apiserver(d6234171-70b3-48b5-98d5-2c3cd8e41f24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:41.043944 kubelet[3639]: E0123 17:29:41.043487 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-b7js4" podUID="d6234171-70b3-48b5-98d5-2c3cd8e41f24" Jan 23 17:29:44.361053 kubelet[3639]: E0123 17:29:44.360824 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7665dd49cb-d8kt6" podUID="6c109b4b-3504-4b89-94ff-4a8e2ba3506a" Jan 23 17:29:45.359063 kubelet[3639]: E0123 17:29:45.358985 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wq8fz" podUID="a12545f0-91c5-4708-a845-2a7a18a8616c" Jan 23 17:29:46.360664 kubelet[3639]: E0123 17:29:46.360614 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtvct" podUID="25c5832d-778b-4f5d-974d-1be8e7376fdb" Jan 23 17:29:51.359046 kubelet[3639]: E0123 17:29:51.358883 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75d7f978dc-h5tcw" podUID="9607b09f-7ec4-4ed2-9e57-38044aa1d0d6" Jan 23 17:29:52.361374 kubelet[3639]: E0123 17:29:52.360969 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcbcd85c4-2prtk" podUID="c60749af-cedd-49c6-899a-24ca91720bf5" Jan 23 17:29:52.361374 kubelet[3639]: E0123 17:29:52.361045 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-nzs8x" podUID="a129fea3-ad15-412b-9854-c14f30f3a9fd" Jan 23 17:29:53.358946 kubelet[3639]: E0123 
17:29:53.358884 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-b7js4" podUID="d6234171-70b3-48b5-98d5-2c3cd8e41f24" Jan 23 17:29:57.360168 containerd[2108]: time="2026-01-23T17:29:57.360086561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:29:57.647394 containerd[2108]: time="2026-01-23T17:29:57.647234844Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:57.652227 containerd[2108]: time="2026-01-23T17:29:57.652170962Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 17:29:57.652382 containerd[2108]: time="2026-01-23T17:29:57.652283360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:57.652599 kubelet[3639]: E0123 17:29:57.652540 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:29:57.652983 kubelet[3639]: E0123 17:29:57.652603 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:29:57.652983 kubelet[3639]: E0123 17:29:57.652708 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9pgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-dtvct_calico-system(25c5832d-778b-4f5d-974d-1be8e7376fdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:57.655675 containerd[2108]: time="2026-01-23T17:29:57.655639235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 17:29:57.906219 containerd[2108]: time="2026-01-23T17:29:57.905725433Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:57.913532 containerd[2108]: time="2026-01-23T17:29:57.913365480Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 17:29:57.913532 containerd[2108]: time="2026-01-23T17:29:57.913419291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:57.913884 kubelet[3639]: E0123 17:29:57.913832 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:29:57.913968 kubelet[3639]: E0123 17:29:57.913890 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 
17:29:57.914362 kubelet[3639]: E0123 17:29:57.914003 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9pgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-dtvct_calico-system(25c5832d-778b-4f5d-974d-1be8e7376fdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:57.915787 kubelet[3639]: E0123 17:29:57.915748 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtvct" podUID="25c5832d-778b-4f5d-974d-1be8e7376fdb" Jan 23 17:29:58.359516 containerd[2108]: time="2026-01-23T17:29:58.359339200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:29:58.619959 containerd[2108]: time="2026-01-23T17:29:58.619699265Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:58.628115 containerd[2108]: time="2026-01-23T17:29:58.628052171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:29:58.628296 containerd[2108]: time="2026-01-23T17:29:58.628056780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:58.628369 kubelet[3639]: E0123 17:29:58.628321 3639 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:29:58.628411 kubelet[3639]: E0123 17:29:58.628379 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:29:58.628504 kubelet[3639]: E0123 17:29:58.628469 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:247d4a4cafdd47dcae21c5927f6ce13b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vxcwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppA
rmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7665dd49cb-d8kt6_calico-system(6c109b4b-3504-4b89-94ff-4a8e2ba3506a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:58.631190 containerd[2108]: time="2026-01-23T17:29:58.631142904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:29:58.903448 containerd[2108]: time="2026-01-23T17:29:58.903234616Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:58.908803 containerd[2108]: time="2026-01-23T17:29:58.908638261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:58.908803 containerd[2108]: time="2026-01-23T17:29:58.908637941Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:29:58.908987 kubelet[3639]: E0123 17:29:58.908934 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:29:58.909248 kubelet[3639]: E0123 17:29:58.908991 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:29:58.909248 kubelet[3639]: E0123 17:29:58.909090 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxcwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7665dd49cb-d8kt6_calico-system(6c109b4b-3504-4b89-94ff-4a8e2ba3506a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:58.910537 kubelet[3639]: E0123 17:29:58.910496 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7665dd49cb-d8kt6" podUID="6c109b4b-3504-4b89-94ff-4a8e2ba3506a" Jan 23 17:29:59.359407 containerd[2108]: time="2026-01-23T17:29:59.359364882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:29:59.620767 containerd[2108]: time="2026-01-23T17:29:59.620349459Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:29:59.626018 containerd[2108]: time="2026-01-23T17:29:59.625823514Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:29:59.626018 containerd[2108]: time="2026-01-23T17:29:59.625926903Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 17:29:59.626556 kubelet[3639]: E0123 17:29:59.626358 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:29:59.626556 kubelet[3639]: E0123 17:29:59.626416 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:29:59.626736 kubelet[3639]: E0123 17:29:59.626518 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:
,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8jrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wq8fz_calico-system(a12545f0-91c5-4708-a845-2a7a18a8616c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:29:59.627958 kubelet[3639]: E0123 17:29:59.627917 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wq8fz" podUID="a12545f0-91c5-4708-a845-2a7a18a8616c" Jan 23 17:30:03.359980 containerd[2108]: time="2026-01-23T17:30:03.359905470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 17:30:03.628481 containerd[2108]: time="2026-01-23T17:30:03.628095881Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:30:03.635865 containerd[2108]: time="2026-01-23T17:30:03.635801397Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 17:30:03.636028 containerd[2108]: time="2026-01-23T17:30:03.635914658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 17:30:03.636201 kubelet[3639]: E0123 17:30:03.636165 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:30:03.636749 kubelet[3639]: E0123 17:30:03.636452 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:30:03.637281 kubelet[3639]: 
E0123 17:30:03.637065 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gvjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7fcbcd85c4-2prtk_calico-system(c60749af-cedd-49c6-899a-24ca91720bf5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 17:30:03.638405 containerd[2108]: time="2026-01-23T17:30:03.637618404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:30:03.638704 kubelet[3639]: E0123 17:30:03.638473 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcbcd85c4-2prtk" podUID="c60749af-cedd-49c6-899a-24ca91720bf5" Jan 23 17:30:03.941383 containerd[2108]: time="2026-01-23T17:30:03.941019882Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 
17:30:03.946773 containerd[2108]: time="2026-01-23T17:30:03.946662402Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:30:03.946773 containerd[2108]: time="2026-01-23T17:30:03.946712125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:30:03.947146 kubelet[3639]: E0123 17:30:03.947075 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:30:03.947265 kubelet[3639]: E0123 17:30:03.947247 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:30:03.948114 kubelet[3639]: E0123 17:30:03.948048 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8t5xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66f6568cfc-nzs8x_calico-apiserver(a129fea3-ad15-412b-9854-c14f30f3a9fd): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:30:03.949467 kubelet[3639]: E0123 17:30:03.949421 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-nzs8x" podUID="a129fea3-ad15-412b-9854-c14f30f3a9fd" Jan 23 17:30:04.360239 containerd[2108]: time="2026-01-23T17:30:04.360086809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:30:04.620748 containerd[2108]: time="2026-01-23T17:30:04.620464294Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:30:04.626510 containerd[2108]: time="2026-01-23T17:30:04.626451911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:30:04.626765 containerd[2108]: time="2026-01-23T17:30:04.626485657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:30:04.626798 kubelet[3639]: E0123 17:30:04.626716 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:30:04.626798 kubelet[3639]: E0123 17:30:04.626772 3639 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:30:04.627305 kubelet[3639]: E0123 17:30:04.627243 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kd6w2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-75d7f978dc-h5tcw_calico-apiserver(9607b09f-7ec4-4ed2-9e57-38044aa1d0d6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:30:04.629308 kubelet[3639]: E0123 17:30:04.629250 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75d7f978dc-h5tcw" podUID="9607b09f-7ec4-4ed2-9e57-38044aa1d0d6" Jan 23 17:30:07.359023 containerd[2108]: time="2026-01-23T17:30:07.358806177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:30:07.601283 containerd[2108]: time="2026-01-23T17:30:07.601227877Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 
17:30:07.608414 containerd[2108]: time="2026-01-23T17:30:07.608358910Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:30:07.608630 containerd[2108]: time="2026-01-23T17:30:07.608459467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:30:07.608844 kubelet[3639]: E0123 17:30:07.608708 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:30:07.609316 kubelet[3639]: E0123 17:30:07.608914 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:30:07.609808 kubelet[3639]: E0123 17:30:07.609733 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nhrx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66f6568cfc-b7js4_calico-apiserver(d6234171-70b3-48b5-98d5-2c3cd8e41f24): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:30:07.611499 kubelet[3639]: E0123 17:30:07.610903 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-b7js4" podUID="d6234171-70b3-48b5-98d5-2c3cd8e41f24" Jan 23 17:30:08.801748 systemd[1]: Started sshd@7-10.200.20.34:22-10.200.16.10:37588.service - OpenSSH per-connection server daemon (10.200.16.10:37588). Jan 23 17:30:08.806697 kernel: kauditd_printk_skb: 64 callbacks suppressed Jan 23 17:30:08.806910 kernel: audit: type=1130 audit(1769189408.801:777): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.34:22-10.200.16.10:37588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:08.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.34:22-10.200.16.10:37588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:30:09.250000 audit[5835]: USER_ACCT pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:09.268143 sshd[5835]: Accepted publickey for core from 10.200.16.10 port 37588 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:30:09.267954 sshd-session[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:30:09.265000 audit[5835]: CRED_ACQ pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:09.283438 kernel: audit: type=1101 audit(1769189409.250:778): pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:09.283564 kernel: audit: type=1103 audit(1769189409.265:779): pid=5835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:09.293591 kernel: audit: type=1006 audit(1769189409.265:780): pid=5835 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 23 17:30:09.265000 audit[5835]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd706cf10 a2=3 a3=0 items=0 ppid=1 pid=5835 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:09.310415 kernel: audit: type=1300 audit(1769189409.265:780): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd706cf10 a2=3 a3=0 items=0 ppid=1 pid=5835 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:09.265000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:30:09.317241 kernel: audit: type=1327 audit(1769189409.265:780): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:30:09.321612 systemd-logind[2081]: New session 11 of user core. Jan 23 17:30:09.326503 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 17:30:09.328000 audit[5835]: USER_START pid=5835 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:09.349000 audit[5839]: CRED_ACQ pid=5839 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:09.363785 kernel: audit: type=1105 audit(1769189409.328:781): pid=5835 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:09.363905 kernel: audit: type=1103 audit(1769189409.349:782): pid=5839 uid=0 auid=500 
ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:09.573398 sshd[5839]: Connection closed by 10.200.16.10 port 37588 Jan 23 17:30:09.573224 sshd-session[5835]: pam_unix(sshd:session): session closed for user core Jan 23 17:30:09.575000 audit[5835]: USER_END pid=5835 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:09.580900 systemd[1]: sshd@7-10.200.20.34:22-10.200.16.10:37588.service: Deactivated successfully. Jan 23 17:30:09.584189 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 17:30:09.575000 audit[5835]: CRED_DISP pid=5835 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:09.599342 kernel: audit: type=1106 audit(1769189409.575:783): pid=5835 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:09.599388 kernel: audit: type=1104 audit(1769189409.575:784): pid=5835 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:09.599458 systemd-logind[2081]: Session 11 logged 
out. Waiting for processes to exit. Jan 23 17:30:09.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.34:22-10.200.16.10:37588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:09.615934 systemd-logind[2081]: Removed session 11. Jan 23 17:30:10.368013 kubelet[3639]: E0123 17:30:10.367207 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7665dd49cb-d8kt6" podUID="6c109b4b-3504-4b89-94ff-4a8e2ba3506a" Jan 23 17:30:12.362004 kubelet[3639]: E0123 17:30:12.361609 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wq8fz" podUID="a12545f0-91c5-4708-a845-2a7a18a8616c" Jan 23 17:30:13.360363 kubelet[3639]: E0123 17:30:13.360315 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed 
to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtvct" podUID="25c5832d-778b-4f5d-974d-1be8e7376fdb" Jan 23 17:30:14.665737 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 17:30:14.665865 kernel: audit: type=1130 audit(1769189414.660:786): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.34:22-10.200.16.10:51290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:14.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.34:22-10.200.16.10:51290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:14.662144 systemd[1]: Started sshd@8-10.200.20.34:22-10.200.16.10:51290.service - OpenSSH per-connection server daemon (10.200.16.10:51290). 
Jan 23 17:30:15.110000 audit[5877]: USER_ACCT pid=5877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:15.111830 sshd[5877]: Accepted publickey for core from 10.200.16.10 port 51290 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:30:15.115092 sshd-session[5877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:30:15.113000 audit[5877]: CRED_ACQ pid=5877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:15.156296 kernel: audit: type=1101 audit(1769189415.110:787): pid=5877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:15.156435 kernel: audit: type=1103 audit(1769189415.113:788): pid=5877 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:15.140785 systemd-logind[2081]: New session 12 of user core. Jan 23 17:30:15.168211 kernel: audit: type=1006 audit(1769189415.113:789): pid=5877 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 23 17:30:15.160878 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 23 17:30:15.113000 audit[5877]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe2714150 a2=3 a3=0 items=0 ppid=1 pid=5877 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:15.113000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:30:15.197302 kernel: audit: type=1300 audit(1769189415.113:789): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe2714150 a2=3 a3=0 items=0 ppid=1 pid=5877 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:15.197426 kernel: audit: type=1327 audit(1769189415.113:789): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:30:15.169000 audit[5877]: USER_START pid=5877 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:15.218767 kernel: audit: type=1105 audit(1769189415.169:790): pid=5877 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:15.186000 audit[5881]: CRED_ACQ pid=5881 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 
17:30:15.234989 kernel: audit: type=1103 audit(1769189415.186:791): pid=5881 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:15.394571 sshd[5881]: Connection closed by 10.200.16.10 port 51290 Jan 23 17:30:15.396213 sshd-session[5877]: pam_unix(sshd:session): session closed for user core Jan 23 17:30:15.396000 audit[5877]: USER_END pid=5877 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:15.396000 audit[5877]: CRED_DISP pid=5877 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:15.418687 systemd[1]: sshd@8-10.200.20.34:22-10.200.16.10:51290.service: Deactivated successfully. Jan 23 17:30:15.421074 systemd[1]: session-12.scope: Deactivated successfully. 
Jan 23 17:30:15.433463 kernel: audit: type=1106 audit(1769189415.396:792): pid=5877 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:15.433590 kernel: audit: type=1104 audit(1769189415.396:793): pid=5877 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:15.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.34:22-10.200.16.10:51290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:15.438515 systemd-logind[2081]: Session 12 logged out. Waiting for processes to exit. Jan 23 17:30:15.439551 systemd-logind[2081]: Removed session 12. 
Jan 23 17:30:16.359358 kubelet[3639]: E0123 17:30:16.359110 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75d7f978dc-h5tcw" podUID="9607b09f-7ec4-4ed2-9e57-38044aa1d0d6" Jan 23 17:30:16.360185 kubelet[3639]: E0123 17:30:16.359808 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcbcd85c4-2prtk" podUID="c60749af-cedd-49c6-899a-24ca91720bf5" Jan 23 17:30:18.358991 kubelet[3639]: E0123 17:30:18.358709 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-b7js4" podUID="d6234171-70b3-48b5-98d5-2c3cd8e41f24" Jan 23 17:30:18.360274 kubelet[3639]: E0123 17:30:18.360027 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-nzs8x" podUID="a129fea3-ad15-412b-9854-c14f30f3a9fd" Jan 23 17:30:20.485590 systemd[1]: Started sshd@9-10.200.20.34:22-10.200.16.10:60344.service - OpenSSH per-connection server daemon (10.200.16.10:60344). Jan 23 17:30:20.505796 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 17:30:20.505924 kernel: audit: type=1130 audit(1769189420.485:795): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.34:22-10.200.16.10:60344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:20.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.34:22-10.200.16.10:60344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:30:20.929000 audit[5897]: USER_ACCT pid=5897 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:20.948456 sshd[5897]: Accepted publickey for core from 10.200.16.10 port 60344 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:30:20.949725 sshd-session[5897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:30:20.948000 audit[5897]: CRED_ACQ pid=5897 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:20.952345 kernel: audit: type=1101 audit(1769189420.929:796): pid=5897 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:20.980676 kernel: audit: type=1103 audit(1769189420.948:797): pid=5897 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:20.980805 kernel: audit: type=1006 audit(1769189420.948:798): pid=5897 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 23 17:30:20.985464 systemd-logind[2081]: New session 13 of user core. 
Jan 23 17:30:20.948000 audit[5897]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffffb4b1c0 a2=3 a3=0 items=0 ppid=1 pid=5897 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:21.006139 kernel: audit: type=1300 audit(1769189420.948:798): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffffb4b1c0 a2=3 a3=0 items=0 ppid=1 pid=5897 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:20.948000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:30:21.012820 kernel: audit: type=1327 audit(1769189420.948:798): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:30:21.013150 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 23 17:30:21.017000 audit[5897]: USER_START pid=5897 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:21.040705 kernel: audit: type=1105 audit(1769189421.017:799): pid=5897 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:21.019000 audit[5901]: CRED_ACQ pid=5901 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:21.058437 kernel: audit: type=1103 audit(1769189421.019:800): pid=5901 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:21.237714 sshd[5901]: Connection closed by 10.200.16.10 port 60344 Jan 23 17:30:21.238067 sshd-session[5897]: pam_unix(sshd:session): session closed for user core Jan 23 17:30:21.239000 audit[5897]: USER_END pid=5897 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:21.260668 systemd[1]: 
sshd@9-10.200.20.34:22-10.200.16.10:60344.service: Deactivated successfully. Jan 23 17:30:21.239000 audit[5897]: CRED_DISP pid=5897 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:21.265109 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 17:30:21.268445 systemd-logind[2081]: Session 13 logged out. Waiting for processes to exit. Jan 23 17:30:21.271969 systemd-logind[2081]: Removed session 13. Jan 23 17:30:21.277087 kernel: audit: type=1106 audit(1769189421.239:801): pid=5897 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:21.277166 kernel: audit: type=1104 audit(1769189421.239:802): pid=5897 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:21.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.34:22-10.200.16.10:60344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:21.332424 systemd[1]: Started sshd@10-10.200.20.34:22-10.200.16.10:60358.service - OpenSSH per-connection server daemon (10.200.16.10:60358). Jan 23 17:30:21.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.34:22-10.200.16.10:60358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:30:21.359338 kubelet[3639]: E0123 17:30:21.358707 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7665dd49cb-d8kt6" podUID="6c109b4b-3504-4b89-94ff-4a8e2ba3506a" Jan 23 17:30:21.762000 audit[5914]: USER_ACCT pid=5914 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:21.762724 sshd[5914]: Accepted publickey for core from 10.200.16.10 port 60358 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:30:21.763000 audit[5914]: CRED_ACQ pid=5914 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:21.763000 audit[5914]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd396a710 a2=3 a3=0 items=0 ppid=1 pid=5914 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:21.763000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:30:21.764732 sshd-session[5914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:30:21.768807 systemd-logind[2081]: New session 14 of user core. Jan 23 17:30:21.778495 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 17:30:21.781000 audit[5914]: USER_START pid=5914 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:21.783000 audit[5918]: CRED_ACQ pid=5918 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:22.137263 sshd[5918]: Connection closed by 10.200.16.10 port 60358 Jan 23 17:30:22.138459 sshd-session[5914]: pam_unix(sshd:session): session closed for user core Jan 23 17:30:22.139000 audit[5914]: USER_END pid=5914 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:22.139000 audit[5914]: CRED_DISP pid=5914 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:22.144471 systemd[1]: sshd@10-10.200.20.34:22-10.200.16.10:60358.service: 
Deactivated successfully. Jan 23 17:30:22.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.34:22-10.200.16.10:60358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:22.147122 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 17:30:22.150475 systemd-logind[2081]: Session 14 logged out. Waiting for processes to exit. Jan 23 17:30:22.153493 systemd-logind[2081]: Removed session 14. Jan 23 17:30:22.245994 systemd[1]: Started sshd@11-10.200.20.34:22-10.200.16.10:60366.service - OpenSSH per-connection server daemon (10.200.16.10:60366). Jan 23 17:30:22.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.34:22-10.200.16.10:60366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:22.676000 audit[5928]: USER_ACCT pid=5928 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:22.677788 sshd[5928]: Accepted publickey for core from 10.200.16.10 port 60366 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:30:22.677000 audit[5928]: CRED_ACQ pid=5928 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:22.678000 audit[5928]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdfee2450 a2=3 a3=0 items=0 ppid=1 pid=5928 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:22.678000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:30:22.680590 sshd-session[5928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:30:22.687640 systemd-logind[2081]: New session 15 of user core. Jan 23 17:30:22.691501 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 17:30:22.696000 audit[5928]: USER_START pid=5928 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:22.698000 audit[5933]: CRED_ACQ pid=5933 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:22.963880 sshd[5933]: Connection closed by 10.200.16.10 port 60366 Jan 23 17:30:22.963777 sshd-session[5928]: pam_unix(sshd:session): session closed for user core Jan 23 17:30:22.963000 audit[5928]: USER_END pid=5928 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:22.963000 audit[5928]: CRED_DISP pid=5928 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:22.968038 systemd-logind[2081]: Session 15 logged out. 
Waiting for processes to exit. Jan 23 17:30:22.968643 systemd[1]: sshd@11-10.200.20.34:22-10.200.16.10:60366.service: Deactivated successfully. Jan 23 17:30:22.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.34:22-10.200.16.10:60366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:22.970740 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 17:30:22.974003 systemd-logind[2081]: Removed session 15. Jan 23 17:30:24.362185 kubelet[3639]: E0123 17:30:24.361883 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtvct" podUID="25c5832d-778b-4f5d-974d-1be8e7376fdb" Jan 23 17:30:25.358830 kubelet[3639]: E0123 17:30:25.358754 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-wq8fz" podUID="a12545f0-91c5-4708-a845-2a7a18a8616c" Jan 23 17:30:27.358215 kubelet[3639]: E0123 17:30:27.358104 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcbcd85c4-2prtk" podUID="c60749af-cedd-49c6-899a-24ca91720bf5" Jan 23 17:30:27.358764 kubelet[3639]: E0123 17:30:27.358679 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75d7f978dc-h5tcw" podUID="9607b09f-7ec4-4ed2-9e57-38044aa1d0d6" Jan 23 17:30:28.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.34:22-10.200.16.10:60378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:28.054861 systemd[1]: Started sshd@12-10.200.20.34:22-10.200.16.10:60378.service - OpenSSH per-connection server daemon (10.200.16.10:60378). 
Jan 23 17:30:28.059295 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 23 17:30:28.059395 kernel: audit: type=1130 audit(1769189428.053:822): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.34:22-10.200.16.10:60378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:28.506000 audit[5952]: USER_ACCT pid=5952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:28.508143 sshd[5952]: Accepted publickey for core from 10.200.16.10 port 60378 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:30:28.524000 audit[5952]: CRED_ACQ pid=5952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:28.527122 sshd-session[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:30:28.540696 kernel: audit: type=1101 audit(1769189428.506:823): pid=5952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:28.540822 kernel: audit: type=1103 audit(1769189428.524:824): pid=5952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:28.550494 kernel: audit: type=1006 audit(1769189428.524:825): pid=5952 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 23 17:30:28.524000 audit[5952]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd56d5ac0 a2=3 a3=0 items=0 ppid=1 pid=5952 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:28.568393 kernel: audit: type=1300 audit(1769189428.524:825): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd56d5ac0 a2=3 a3=0 items=0 ppid=1 pid=5952 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:28.524000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:30:28.573425 systemd-logind[2081]: New session 16 of user core. Jan 23 17:30:28.576881 kernel: audit: type=1327 audit(1769189428.524:825): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:30:28.580685 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 23 17:30:28.583000 audit[5952]: USER_START pid=5952 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:28.603000 audit[5956]: CRED_ACQ pid=5956 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:28.620667 kernel: audit: type=1105 audit(1769189428.583:826): pid=5952 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:28.620846 kernel: audit: type=1103 audit(1769189428.603:827): pid=5956 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:28.836157 sshd[5956]: Connection closed by 10.200.16.10 port 60378 Jan 23 17:30:28.836031 sshd-session[5952]: pam_unix(sshd:session): session closed for user core Jan 23 17:30:28.837000 audit[5952]: USER_END pid=5952 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:28.837000 audit[5952]: CRED_DISP pid=5952 uid=0 auid=500 ses=16 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:28.877623 kernel: audit: type=1106 audit(1769189428.837:828): pid=5952 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:28.877700 kernel: audit: type=1104 audit(1769189428.837:829): pid=5952 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:28.865031 systemd[1]: sshd@12-10.200.20.34:22-10.200.16.10:60378.service: Deactivated successfully. Jan 23 17:30:28.866831 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 17:30:28.878796 systemd-logind[2081]: Session 16 logged out. Waiting for processes to exit. Jan 23 17:30:28.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.34:22-10.200.16.10:60378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:28.879782 systemd-logind[2081]: Removed session 16. 
Jan 23 17:30:32.359918 kubelet[3639]: E0123 17:30:32.359096 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-nzs8x" podUID="a129fea3-ad15-412b-9854-c14f30f3a9fd" Jan 23 17:30:32.361113 kubelet[3639]: E0123 17:30:32.361012 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-b7js4" podUID="d6234171-70b3-48b5-98d5-2c3cd8e41f24" Jan 23 17:30:33.359304 kubelet[3639]: E0123 17:30:33.359192 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7665dd49cb-d8kt6" podUID="6c109b4b-3504-4b89-94ff-4a8e2ba3506a" Jan 23 17:30:33.925583 systemd[1]: Started sshd@13-10.200.20.34:22-10.200.16.10:56210.service - OpenSSH per-connection server daemon (10.200.16.10:56210). Jan 23 17:30:33.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.34:22-10.200.16.10:56210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:33.928946 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 17:30:33.929040 kernel: audit: type=1130 audit(1769189433.924:831): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.34:22-10.200.16.10:56210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:34.368000 audit[5970]: USER_ACCT pid=5970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:34.387653 sshd[5970]: Accepted publickey for core from 10.200.16.10 port 56210 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:30:34.386232 sshd-session[5970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:30:34.384000 audit[5970]: CRED_ACQ pid=5970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:34.405967 kernel: audit: type=1101 audit(1769189434.368:832): pid=5970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:34.406052 kernel: audit: type=1103 audit(1769189434.384:833): pid=5970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:34.392299 systemd-logind[2081]: New session 17 of user core. Jan 23 17:30:34.414948 kernel: audit: type=1006 audit(1769189434.384:834): pid=5970 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 23 17:30:34.417532 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 17:30:34.384000 audit[5970]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc5ac9e0 a2=3 a3=0 items=0 ppid=1 pid=5970 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:34.437504 kernel: audit: type=1300 audit(1769189434.384:834): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc5ac9e0 a2=3 a3=0 items=0 ppid=1 pid=5970 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:34.384000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:30:34.445484 kernel: audit: type=1327 audit(1769189434.384:834): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:30:34.424000 audit[5970]: USER_START pid=5970 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:34.467632 kernel: audit: type=1105 audit(1769189434.424:835): pid=5970 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:34.427000 audit[5974]: CRED_ACQ pid=5974 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:34.483359 kernel: audit: type=1103 audit(1769189434.427:836): pid=5974 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:34.651529 sshd[5974]: Connection closed by 10.200.16.10 port 56210 Jan 23 17:30:34.652523 sshd-session[5970]: pam_unix(sshd:session): session closed for user core Jan 23 17:30:34.653000 audit[5970]: USER_END pid=5970 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:34.657358 systemd[1]: sshd@13-10.200.20.34:22-10.200.16.10:56210.service: Deactivated successfully. Jan 23 17:30:34.659244 systemd[1]: session-17.scope: Deactivated successfully. 
Jan 23 17:30:34.679517 systemd-logind[2081]: Session 17 logged out. Waiting for processes to exit. Jan 23 17:30:34.653000 audit[5970]: CRED_DISP pid=5970 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:34.680946 systemd-logind[2081]: Removed session 17. Jan 23 17:30:34.693763 kernel: audit: type=1106 audit(1769189434.653:837): pid=5970 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:34.693870 kernel: audit: type=1104 audit(1769189434.653:838): pid=5970 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:34.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.34:22-10.200.16.10:56210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:30:36.359155 kubelet[3639]: E0123 17:30:36.358829 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wq8fz" podUID="a12545f0-91c5-4708-a845-2a7a18a8616c" Jan 23 17:30:36.360398 kubelet[3639]: E0123 17:30:36.360305 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtvct" podUID="25c5832d-778b-4f5d-974d-1be8e7376fdb" Jan 23 17:30:39.359119 kubelet[3639]: E0123 17:30:39.358697 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcbcd85c4-2prtk" podUID="c60749af-cedd-49c6-899a-24ca91720bf5" Jan 23 17:30:39.739750 systemd[1]: Started sshd@14-10.200.20.34:22-10.200.16.10:55914.service - OpenSSH per-connection server daemon (10.200.16.10:55914). Jan 23 17:30:39.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.34:22-10.200.16.10:55914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:39.743275 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 17:30:39.743356 kernel: audit: type=1130 audit(1769189439.739:840): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.34:22-10.200.16.10:55914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:40.152864 sshd[5992]: Accepted publickey for core from 10.200.16.10 port 55914 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:30:40.152000 audit[5992]: USER_ACCT pid=5992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:40.186554 kernel: audit: type=1101 audit(1769189440.152:841): pid=5992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:40.186694 kernel: audit: type=1103 audit(1769189440.171:842): pid=5992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:40.171000 audit[5992]: CRED_ACQ pid=5992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:40.172374 sshd-session[5992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:30:40.197142 kernel: audit: type=1006 audit(1769189440.171:843): pid=5992 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 23 17:30:40.171000 audit[5992]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee0f4780 a2=3 a3=0 items=0 ppid=1 pid=5992 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:40.214299 kernel: audit: type=1300 audit(1769189440.171:843): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee0f4780 a2=3 a3=0 items=0 ppid=1 pid=5992 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:40.171000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:30:40.221282 kernel: audit: type=1327 audit(1769189440.171:843): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:30:40.226890 systemd-logind[2081]: New session 18 of user core. Jan 23 17:30:40.232467 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 23 17:30:40.235000 audit[5992]: USER_START pid=5992 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:40.255000 audit[5996]: CRED_ACQ pid=5996 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:40.268997 kernel: audit: type=1105 audit(1769189440.235:844): pid=5992 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:40.269145 kernel: audit: type=1103 audit(1769189440.255:845): pid=5996 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:40.460939 sshd[5996]: Connection closed by 10.200.16.10 port 55914 Jan 23 17:30:40.461525 sshd-session[5992]: pam_unix(sshd:session): session closed for user core Jan 23 17:30:40.463000 audit[5992]: USER_END pid=5992 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:40.471838 systemd[1]: 
sshd@14-10.200.20.34:22-10.200.16.10:55914.service: Deactivated successfully. Jan 23 17:30:40.474702 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 17:30:40.475883 systemd-logind[2081]: Session 18 logged out. Waiting for processes to exit. Jan 23 17:30:40.478019 systemd-logind[2081]: Removed session 18. Jan 23 17:30:40.468000 audit[5992]: CRED_DISP pid=5992 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:40.502473 kernel: audit: type=1106 audit(1769189440.463:846): pid=5992 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:40.502605 kernel: audit: type=1104 audit(1769189440.468:847): pid=5992 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:40.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.34:22-10.200.16.10:55914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:40.549937 systemd[1]: Started sshd@15-10.200.20.34:22-10.200.16.10:55926.service - OpenSSH per-connection server daemon (10.200.16.10:55926). Jan 23 17:30:40.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.34:22-10.200.16.10:55926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:30:40.974000 audit[6007]: USER_ACCT pid=6007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:40.975835 sshd[6007]: Accepted publickey for core from 10.200.16.10 port 55926 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:30:40.976000 audit[6007]: CRED_ACQ pid=6007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:40.976000 audit[6007]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc27e6290 a2=3 a3=0 items=0 ppid=1 pid=6007 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:40.976000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:30:40.977227 sshd-session[6007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:30:40.981522 systemd-logind[2081]: New session 19 of user core. Jan 23 17:30:41.000510 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 23 17:30:41.003000 audit[6007]: USER_START pid=6007 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:41.004000 audit[6011]: CRED_ACQ pid=6011 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:41.349358 sshd[6011]: Connection closed by 10.200.16.10 port 55926 Jan 23 17:30:41.350669 sshd-session[6007]: pam_unix(sshd:session): session closed for user core Jan 23 17:30:41.351000 audit[6007]: USER_END pid=6007 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:41.351000 audit[6007]: CRED_DISP pid=6007 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:41.354961 systemd-logind[2081]: Session 19 logged out. Waiting for processes to exit. Jan 23 17:30:41.355485 systemd[1]: sshd@15-10.200.20.34:22-10.200.16.10:55926.service: Deactivated successfully. Jan 23 17:30:41.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.34:22-10.200.16.10:55926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:30:41.359298 kubelet[3639]: E0123 17:30:41.359179 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75d7f978dc-h5tcw" podUID="9607b09f-7ec4-4ed2-9e57-38044aa1d0d6" Jan 23 17:30:41.359499 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 17:30:41.363283 systemd-logind[2081]: Removed session 19. Jan 23 17:30:41.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.34:22-10.200.16.10:55930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:41.437326 systemd[1]: Started sshd@16-10.200.20.34:22-10.200.16.10:55930.service - OpenSSH per-connection server daemon (10.200.16.10:55930). 
Jan 23 17:30:41.859000 audit[6021]: USER_ACCT pid=6021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:41.859969 sshd[6021]: Accepted publickey for core from 10.200.16.10 port 55930 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:30:41.861000 audit[6021]: CRED_ACQ pid=6021 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:41.861000 audit[6021]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffea6ed1e0 a2=3 a3=0 items=0 ppid=1 pid=6021 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:41.861000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:30:41.862624 sshd-session[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:30:41.870371 systemd-logind[2081]: New session 20 of user core. Jan 23 17:30:41.877132 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 23 17:30:41.882000 audit[6021]: USER_START pid=6021 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:41.884000 audit[6050]: CRED_ACQ pid=6050 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:42.494000 audit[6068]: NETFILTER_CFG table=filter:145 family=2 entries=26 op=nft_register_rule pid=6068 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:30:42.494000 audit[6068]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd0bd1660 a2=0 a3=1 items=0 ppid=3770 pid=6068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:42.494000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:30:42.499000 audit[6068]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=6068 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:30:42.499000 audit[6068]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd0bd1660 a2=0 a3=1 items=0 ppid=3770 pid=6068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:42.499000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:30:42.510000 audit[6070]: NETFILTER_CFG table=filter:147 family=2 entries=38 op=nft_register_rule pid=6070 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:30:42.510000 audit[6070]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd8c7dd80 a2=0 a3=1 items=0 ppid=3770 pid=6070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:42.510000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:30:42.514000 audit[6070]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=6070 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:30:42.514000 audit[6070]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd8c7dd80 a2=0 a3=1 items=0 ppid=3770 pid=6070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:42.514000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:30:42.594736 sshd[6050]: Connection closed by 10.200.16.10 port 55930 Jan 23 17:30:42.594581 sshd-session[6021]: pam_unix(sshd:session): session closed for user core Jan 23 17:30:42.598000 audit[6021]: USER_END pid=6021 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 
addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:42.599000 audit[6021]: CRED_DISP pid=6021 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:42.602663 systemd-logind[2081]: Session 20 logged out. Waiting for processes to exit. Jan 23 17:30:42.603313 systemd[1]: sshd@16-10.200.20.34:22-10.200.16.10:55930.service: Deactivated successfully. Jan 23 17:30:42.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.34:22-10.200.16.10:55930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:42.608242 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 17:30:42.611638 systemd-logind[2081]: Removed session 20. Jan 23 17:30:42.677623 systemd[1]: Started sshd@17-10.200.20.34:22-10.200.16.10:55936.service - OpenSSH per-connection server daemon (10.200.16.10:55936). Jan 23 17:30:42.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.34:22-10.200.16.10:55936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:30:43.080000 audit[6075]: USER_ACCT pid=6075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:43.081814 sshd[6075]: Accepted publickey for core from 10.200.16.10 port 55936 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:30:43.083000 audit[6075]: CRED_ACQ pid=6075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:43.083000 audit[6075]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd68f8260 a2=3 a3=0 items=0 ppid=1 pid=6075 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:43.083000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:30:43.084890 sshd-session[6075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:30:43.092986 systemd-logind[2081]: New session 21 of user core. Jan 23 17:30:43.097883 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 23 17:30:43.102000 audit[6075]: USER_START pid=6075 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:43.105000 audit[6079]: CRED_ACQ pid=6079 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:43.441034 sshd[6079]: Connection closed by 10.200.16.10 port 55936 Jan 23 17:30:43.443474 sshd-session[6075]: pam_unix(sshd:session): session closed for user core Jan 23 17:30:43.444000 audit[6075]: USER_END pid=6075 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:43.445000 audit[6075]: CRED_DISP pid=6075 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:43.448242 systemd[1]: sshd@17-10.200.20.34:22-10.200.16.10:55936.service: Deactivated successfully. Jan 23 17:30:43.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.34:22-10.200.16.10:55936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:43.452065 systemd[1]: session-21.scope: Deactivated successfully. 
Jan 23 17:30:43.454680 systemd-logind[2081]: Session 21 logged out. Waiting for processes to exit. Jan 23 17:30:43.455713 systemd-logind[2081]: Removed session 21. Jan 23 17:30:43.530964 systemd[1]: Started sshd@18-10.200.20.34:22-10.200.16.10:55950.service - OpenSSH per-connection server daemon (10.200.16.10:55950). Jan 23 17:30:43.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.34:22-10.200.16.10:55950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:43.960000 audit[6089]: USER_ACCT pid=6089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:43.961004 sshd[6089]: Accepted publickey for core from 10.200.16.10 port 55950 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:30:43.961000 audit[6089]: CRED_ACQ pid=6089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:43.961000 audit[6089]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe23d1310 a2=3 a3=0 items=0 ppid=1 pid=6089 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:43.961000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:30:43.962853 sshd-session[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:30:43.968355 systemd-logind[2081]: New session 22 of user core. 
Jan 23 17:30:43.972534 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 17:30:43.975000 audit[6089]: USER_START pid=6089 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:43.976000 audit[6093]: CRED_ACQ pid=6093 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:44.245121 sshd[6093]: Connection closed by 10.200.16.10 port 55950 Jan 23 17:30:44.246175 sshd-session[6089]: pam_unix(sshd:session): session closed for user core Jan 23 17:30:44.247000 audit[6089]: USER_END pid=6089 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:44.247000 audit[6089]: CRED_DISP pid=6089 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:44.250517 systemd[1]: sshd@18-10.200.20.34:22-10.200.16.10:55950.service: Deactivated successfully. Jan 23 17:30:44.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.34:22-10.200.16.10:55950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:30:44.252908 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 17:30:44.254725 systemd-logind[2081]: Session 22 logged out. Waiting for processes to exit. Jan 23 17:30:44.255617 systemd-logind[2081]: Removed session 22. Jan 23 17:30:44.360313 containerd[2108]: time="2026-01-23T17:30:44.360192870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:30:44.660594 containerd[2108]: time="2026-01-23T17:30:44.658006687Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:30:44.665505 containerd[2108]: time="2026-01-23T17:30:44.665390746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:30:44.665505 containerd[2108]: time="2026-01-23T17:30:44.665440156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:30:44.665856 kubelet[3639]: E0123 17:30:44.665822 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:30:44.666602 kubelet[3639]: E0123 17:30:44.666188 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:30:44.666602 kubelet[3639]: E0123 17:30:44.666418 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8t5xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66f6568cfc-nzs8x_calico-apiserver(a129fea3-ad15-412b-9854-c14f30f3a9fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:30:44.667035 containerd[2108]: time="2026-01-23T17:30:44.666922995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:30:44.668540 kubelet[3639]: E0123 17:30:44.668498 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-nzs8x" podUID="a129fea3-ad15-412b-9854-c14f30f3a9fd" Jan 23 17:30:44.965512 containerd[2108]: time="2026-01-23T17:30:44.965455679Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 
17:30:44.974652 containerd[2108]: time="2026-01-23T17:30:44.974583437Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:30:44.974795 containerd[2108]: time="2026-01-23T17:30:44.974696530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 17:30:44.974914 kubelet[3639]: E0123 17:30:44.974872 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:30:44.974960 kubelet[3639]: E0123 17:30:44.974925 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:30:44.975041 kubelet[3639]: E0123 17:30:44.975014 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:247d4a4cafdd47dcae21c5927f6ce13b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vxcwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7665dd49cb-d8kt6_calico-system(6c109b4b-3504-4b89-94ff-4a8e2ba3506a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:30:44.977257 containerd[2108]: time="2026-01-23T17:30:44.977225452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:30:45.267754 containerd[2108]: 
time="2026-01-23T17:30:45.267628202Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:30:45.272478 containerd[2108]: time="2026-01-23T17:30:45.272414648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:30:45.272619 containerd[2108]: time="2026-01-23T17:30:45.272518373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 17:30:45.272879 kubelet[3639]: E0123 17:30:45.272771 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:30:45.274298 kubelet[3639]: E0123 17:30:45.273248 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:30:45.274298 kubelet[3639]: E0123 17:30:45.273370 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxcwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7665dd49cb-d8kt6_calico-system(6c109b4b-3504-4b89-94ff-4a8e2ba3506a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:30:45.274793 kubelet[3639]: E0123 17:30:45.274761 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7665dd49cb-d8kt6" podUID="6c109b4b-3504-4b89-94ff-4a8e2ba3506a" Jan 23 17:30:47.252000 audit[6106]: NETFILTER_CFG table=filter:149 family=2 entries=26 op=nft_register_rule pid=6106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:30:47.257095 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 23 17:30:47.257379 kernel: audit: type=1325 audit(1769189447.252:889): table=filter:149 family=2 entries=26 op=nft_register_rule pid=6106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:30:47.252000 audit[6106]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd6e6c2c0 a2=0 a3=1 items=0 ppid=3770 pid=6106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:47.284192 kernel: audit: type=1300 audit(1769189447.252:889): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd6e6c2c0 a2=0 a3=1 items=0 ppid=3770 pid=6106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:47.252000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:30:47.296441 kernel: audit: type=1327 audit(1769189447.252:889): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:30:47.288000 audit[6106]: NETFILTER_CFG table=nat:150 family=2 entries=104 op=nft_register_chain pid=6106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:30:47.306203 kernel: audit: type=1325 audit(1769189447.288:890): table=nat:150 family=2 entries=104 op=nft_register_chain pid=6106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:30:47.288000 audit[6106]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffd6e6c2c0 a2=0 a3=1 items=0 ppid=3770 pid=6106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:47.324352 kernel: audit: type=1300 audit(1769189447.288:890): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffd6e6c2c0 a2=0 a3=1 items=0 ppid=3770 pid=6106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:47.288000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:30:47.337333 kernel: audit: type=1327 audit(1769189447.288:890): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:30:47.358826 kubelet[3639]: E0123 17:30:47.358781 3639 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-b7js4" podUID="d6234171-70b3-48b5-98d5-2c3cd8e41f24" Jan 23 17:30:48.363526 containerd[2108]: time="2026-01-23T17:30:48.361959674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:30:48.617297 containerd[2108]: time="2026-01-23T17:30:48.616989772Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:30:48.622065 containerd[2108]: time="2026-01-23T17:30:48.622011813Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 17:30:48.622482 containerd[2108]: time="2026-01-23T17:30:48.622102257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 17:30:48.622657 kubelet[3639]: E0123 17:30:48.622462 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:30:48.622657 kubelet[3639]: E0123 17:30:48.622513 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 
17:30:48.622657 kubelet[3639]: E0123 17:30:48.622616 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9pgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-dtvct_calico-system(25c5832d-778b-4f5d-974d-1be8e7376fdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 17:30:48.624827 containerd[2108]: time="2026-01-23T17:30:48.624768333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 17:30:48.887799 containerd[2108]: time="2026-01-23T17:30:48.887672228Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:30:48.893823 containerd[2108]: time="2026-01-23T17:30:48.893683017Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 17:30:48.893823 containerd[2108]: time="2026-01-23T17:30:48.893729187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 17:30:48.893996 kubelet[3639]: E0123 17:30:48.893933 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:30:48.893996 kubelet[3639]: E0123 17:30:48.893982 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 
17:30:48.894134 kubelet[3639]: E0123 17:30:48.894087 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9pgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-dtvct_calico-system(25c5832d-778b-4f5d-974d-1be8e7376fdb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 17:30:48.895456 kubelet[3639]: E0123 17:30:48.895396 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtvct" podUID="25c5832d-778b-4f5d-974d-1be8e7376fdb" Jan 23 17:30:49.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.34:22-10.200.16.10:55964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:49.335481 systemd[1]: Started sshd@19-10.200.20.34:22-10.200.16.10:55964.service - OpenSSH per-connection server daemon (10.200.16.10:55964). Jan 23 17:30:49.351298 kernel: audit: type=1130 audit(1769189449.334:891): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.34:22-10.200.16.10:55964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:30:49.779536 sshd[6108]: Accepted publickey for core from 10.200.16.10 port 55964 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:30:49.778000 audit[6108]: USER_ACCT pid=6108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:49.799342 sshd-session[6108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:30:49.797000 audit[6108]: CRED_ACQ pid=6108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:49.820964 kernel: audit: type=1101 audit(1769189449.778:892): pid=6108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:49.821105 kernel: audit: type=1103 audit(1769189449.797:893): pid=6108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:49.822664 systemd-logind[2081]: New session 23 of user core. Jan 23 17:30:49.823492 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 23 17:30:49.837433 kernel: audit: type=1006 audit(1769189449.797:894): pid=6108 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 23 17:30:49.797000 audit[6108]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeb12fea0 a2=3 a3=0 items=0 ppid=1 pid=6108 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:49.797000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:30:49.837000 audit[6108]: USER_START pid=6108 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:49.839000 audit[6112]: CRED_ACQ pid=6112 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:50.085416 sshd[6112]: Connection closed by 10.200.16.10 port 55964 Jan 23 17:30:50.085906 sshd-session[6108]: pam_unix(sshd:session): session closed for user core Jan 23 17:30:50.086000 audit[6108]: USER_END pid=6108 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:50.087000 audit[6108]: CRED_DISP pid=6108 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:50.092264 systemd[1]: sshd@19-10.200.20.34:22-10.200.16.10:55964.service: Deactivated successfully. Jan 23 17:30:50.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.34:22-10.200.16.10:55964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:50.095427 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 17:30:50.097686 systemd-logind[2081]: Session 23 logged out. Waiting for processes to exit. Jan 23 17:30:50.099498 systemd-logind[2081]: Removed session 23. Jan 23 17:30:50.361683 containerd[2108]: time="2026-01-23T17:30:50.361306964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 17:30:50.627614 containerd[2108]: time="2026-01-23T17:30:50.627472241Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:30:50.632092 containerd[2108]: time="2026-01-23T17:30:50.632032191Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 17:30:50.632240 containerd[2108]: time="2026-01-23T17:30:50.632138619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 17:30:50.632427 kubelet[3639]: E0123 17:30:50.632390 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 
23 17:30:50.632914 kubelet[3639]: E0123 17:30:50.632731 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:30:50.633068 kubelet[3639]: E0123 17:30:50.632919 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gvjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7fcbcd85c4-2prtk_calico-system(c60749af-cedd-49c6-899a-24ca91720bf5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 17:30:50.633524 containerd[2108]: time="2026-01-23T17:30:50.633477069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:30:50.634538 kubelet[3639]: E0123 17:30:50.634489 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found\"" pod="calico-system/calico-kube-controllers-7fcbcd85c4-2prtk" podUID="c60749af-cedd-49c6-899a-24ca91720bf5" Jan 23 17:30:50.908377 containerd[2108]: time="2026-01-23T17:30:50.908192380Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:30:50.914367 containerd[2108]: time="2026-01-23T17:30:50.914311526Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:30:50.914708 containerd[2108]: time="2026-01-23T17:30:50.914514519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 17:30:50.915063 kubelet[3639]: E0123 17:30:50.914852 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:30:50.915063 kubelet[3639]: E0123 17:30:50.914905 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:30:50.915063 kubelet[3639]: E0123 17:30:50.915026 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8jrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wq8fz_calico-system(a12545f0-91c5-4708-a845-2a7a18a8616c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:30:50.916439 kubelet[3639]: E0123 17:30:50.916348 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wq8fz" podUID="a12545f0-91c5-4708-a845-2a7a18a8616c" Jan 23 17:30:55.175539 systemd[1]: Started sshd@20-10.200.20.34:22-10.200.16.10:45526.service - OpenSSH per-connection server daemon (10.200.16.10:45526). 
Jan 23 17:30:55.179549 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 23 17:30:55.180163 kernel: audit: type=1130 audit(1769189455.174:900): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.34:22-10.200.16.10:45526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.34:22-10.200.16.10:45526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.358715 kubelet[3639]: E0123 17:30:55.358647 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-nzs8x" podUID="a129fea3-ad15-412b-9854-c14f30f3a9fd" Jan 23 17:30:55.612000 audit[6146]: USER_ACCT pid=6146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:55.614458 sshd[6146]: Accepted publickey for core from 10.200.16.10 port 45526 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:30:55.630000 audit[6146]: CRED_ACQ pid=6146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:55.632202 
sshd-session[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:30:55.647518 kernel: audit: type=1101 audit(1769189455.612:901): pid=6146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:55.647665 kernel: audit: type=1103 audit(1769189455.630:902): pid=6146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:55.659229 kernel: audit: type=1006 audit(1769189455.630:903): pid=6146 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 23 17:30:55.630000 audit[6146]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc8734080 a2=3 a3=0 items=0 ppid=1 pid=6146 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:55.630000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:30:55.685803 systemd-logind[2081]: New session 24 of user core. 
Jan 23 17:30:55.688281 kernel: audit: type=1300 audit(1769189455.630:903): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc8734080 a2=3 a3=0 items=0 ppid=1 pid=6146 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:30:55.688357 kernel: audit: type=1327 audit(1769189455.630:903): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:30:55.692047 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 17:30:55.695000 audit[6146]: USER_START pid=6146 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:55.714000 audit[6150]: CRED_ACQ pid=6150 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:55.774217 kernel: audit: type=1105 audit(1769189455.695:904): pid=6146 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:55.774384 kernel: audit: type=1103 audit(1769189455.714:905): pid=6150 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:55.924370 sshd[6150]: Connection closed by 
10.200.16.10 port 45526 Jan 23 17:30:55.924493 sshd-session[6146]: pam_unix(sshd:session): session closed for user core Jan 23 17:30:55.924000 audit[6146]: USER_END pid=6146 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:55.924000 audit[6146]: CRED_DISP pid=6146 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:55.951308 systemd[1]: sshd@20-10.200.20.34:22-10.200.16.10:45526.service: Deactivated successfully. Jan 23 17:30:55.955255 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 17:30:55.965023 kernel: audit: type=1106 audit(1769189455.924:906): pid=6146 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:55.965224 kernel: audit: type=1104 audit(1769189455.924:907): pid=6146 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:30:55.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.34:22-10.200.16.10:45526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:30:55.967654 systemd-logind[2081]: Session 24 logged out. Waiting for processes to exit. Jan 23 17:30:55.969535 systemd-logind[2081]: Removed session 24. Jan 23 17:30:56.359845 kubelet[3639]: E0123 17:30:56.359761 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7665dd49cb-d8kt6" podUID="6c109b4b-3504-4b89-94ff-4a8e2ba3506a" Jan 23 17:30:56.361167 containerd[2108]: time="2026-01-23T17:30:56.360446481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:30:56.630307 containerd[2108]: time="2026-01-23T17:30:56.630014939Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:30:56.633927 containerd[2108]: time="2026-01-23T17:30:56.633873238Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:30:56.634155 containerd[2108]: time="2026-01-23T17:30:56.633945234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:30:56.634463 
kubelet[3639]: E0123 17:30:56.634412 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:30:56.634670 kubelet[3639]: E0123 17:30:56.634557 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:30:56.634870 kubelet[3639]: E0123 17:30:56.634821 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kd6w2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-75d7f978dc-h5tcw_calico-apiserver(9607b09f-7ec4-4ed2-9e57-38044aa1d0d6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:30:56.636398 kubelet[3639]: E0123 17:30:56.636350 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75d7f978dc-h5tcw" podUID="9607b09f-7ec4-4ed2-9e57-38044aa1d0d6" Jan 23 17:30:58.540315 waagent[2343]: 2026-01-23T17:30:58.540205Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 23 17:30:58.549989 waagent[2343]: 2026-01-23T17:30:58.549938Z INFO ExtHandler Jan 23 17:30:58.550114 waagent[2343]: 2026-01-23T17:30:58.550051Z INFO 
ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 86013ae6-2304-4a71-b70b-5f7d656b178c eTag: 5367850821843483301 source: Fabric] Jan 23 17:30:58.550431 waagent[2343]: 2026-01-23T17:30:58.550398Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 23 17:30:58.550939 waagent[2343]: 2026-01-23T17:30:58.550901Z INFO ExtHandler Jan 23 17:30:58.550978 waagent[2343]: 2026-01-23T17:30:58.550964Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 23 17:30:58.617375 waagent[2343]: 2026-01-23T17:30:58.617315Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 17:30:58.678937 waagent[2343]: 2026-01-23T17:30:58.678864Z INFO ExtHandler Downloaded certificate {'thumbprint': '11F9DD3EF9BAEADA3368A08709284D9B97F53B05', 'hasPrivateKey': True} Jan 23 17:30:58.679405 waagent[2343]: 2026-01-23T17:30:58.679369Z INFO ExtHandler Fetch goal state completed Jan 23 17:30:58.679705 waagent[2343]: 2026-01-23T17:30:58.679676Z INFO ExtHandler ExtHandler Jan 23 17:30:58.679751 waagent[2343]: 2026-01-23T17:30:58.679732Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: e0b7db7e-d44c-4819-aabc-573170ba23db correlation 6e77e131-dbdf-45e4-89d6-9a14c25eba16 created: 2026-01-23T17:30:51.463692Z] Jan 23 17:30:58.680047 waagent[2343]: 2026-01-23T17:30:58.680012Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 23 17:30:58.680431 waagent[2343]: 2026-01-23T17:30:58.680406Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 23 17:31:01.011224 systemd[1]: Started sshd@21-10.200.20.34:22-10.200.16.10:57942.service - OpenSSH per-connection server daemon (10.200.16.10:57942). 
Jan 23 17:31:01.018000 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 17:31:01.018075 kernel: audit: type=1130 audit(1769189461.011:909): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.34:22-10.200.16.10:57942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:01.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.34:22-10.200.16.10:57942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:01.446000 audit[6167]: USER_ACCT pid=6167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:01.464960 sshd[6167]: Accepted publickey for core from 10.200.16.10 port 57942 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:31:01.464771 sshd-session[6167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:01.463000 audit[6167]: CRED_ACQ pid=6167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:01.466422 kernel: audit: type=1101 audit(1769189461.446:910): pid=6167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:01.494650 kernel: audit: type=1103 audit(1769189461.463:911): pid=6167 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:01.494771 kernel: audit: type=1006 audit(1769189461.463:912): pid=6167 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 23 17:31:01.463000 audit[6167]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcb889cd0 a2=3 a3=0 items=0 ppid=1 pid=6167 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:01.513979 kernel: audit: type=1300 audit(1769189461.463:912): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcb889cd0 a2=3 a3=0 items=0 ppid=1 pid=6167 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:01.513402 systemd-logind[2081]: New session 25 of user core. Jan 23 17:31:01.463000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:31:01.520643 kernel: audit: type=1327 audit(1769189461.463:912): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:31:01.526542 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 23 17:31:01.529000 audit[6167]: USER_START pid=6167 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:01.531000 audit[6171]: CRED_ACQ pid=6171 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:01.573569 kernel: audit: type=1105 audit(1769189461.529:913): pid=6167 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:01.573703 kernel: audit: type=1103 audit(1769189461.531:914): pid=6171 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:01.754621 sshd[6171]: Connection closed by 10.200.16.10 port 57942 Jan 23 17:31:01.755459 sshd-session[6167]: pam_unix(sshd:session): session closed for user core Jan 23 17:31:01.756000 audit[6167]: USER_END pid=6167 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:01.759666 systemd-logind[2081]: Session 25 logged out. 
Waiting for processes to exit. Jan 23 17:31:01.761213 systemd[1]: sshd@21-10.200.20.34:22-10.200.16.10:57942.service: Deactivated successfully. Jan 23 17:31:01.764097 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 17:31:01.767479 systemd-logind[2081]: Removed session 25. Jan 23 17:31:01.756000 audit[6167]: CRED_DISP pid=6167 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:01.793907 kernel: audit: type=1106 audit(1769189461.756:915): pid=6167 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:01.794036 kernel: audit: type=1104 audit(1769189461.756:916): pid=6167 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:01.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.34:22-10.200.16.10:57942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:31:02.363325 containerd[2108]: time="2026-01-23T17:31:02.363056289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:31:02.662572 containerd[2108]: time="2026-01-23T17:31:02.662357862Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:31:02.668191 containerd[2108]: time="2026-01-23T17:31:02.668144337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:31:02.668479 containerd[2108]: time="2026-01-23T17:31:02.668358281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:31:02.668630 kubelet[3639]: E0123 17:31:02.668577 3639 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:31:02.668924 kubelet[3639]: E0123 17:31:02.668637 3639 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:31:02.668924 kubelet[3639]: E0123 17:31:02.668738 3639 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nhrx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66f6568cfc-b7js4_calico-apiserver(d6234171-70b3-48b5-98d5-2c3cd8e41f24): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:31:02.670227 kubelet[3639]: E0123 17:31:02.670190 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-b7js4" podUID="d6234171-70b3-48b5-98d5-2c3cd8e41f24" Jan 23 17:31:03.359115 kubelet[3639]: E0123 17:31:03.358827 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wq8fz" podUID="a12545f0-91c5-4708-a845-2a7a18a8616c" Jan 23 17:31:03.359497 kubelet[3639]: E0123 17:31:03.359425 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtvct" podUID="25c5832d-778b-4f5d-974d-1be8e7376fdb" Jan 23 17:31:05.358693 kubelet[3639]: E0123 17:31:05.358655 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcbcd85c4-2prtk" podUID="c60749af-cedd-49c6-899a-24ca91720bf5" Jan 23 17:31:06.360251 kubelet[3639]: E0123 17:31:06.360108 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-nzs8x" podUID="a129fea3-ad15-412b-9854-c14f30f3a9fd" Jan 23 17:31:06.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.34:22-10.200.16.10:57958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:06.846412 systemd[1]: Started sshd@22-10.200.20.34:22-10.200.16.10:57958.service - OpenSSH per-connection server daemon (10.200.16.10:57958). 
Jan 23 17:31:06.850355 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 17:31:06.850448 kernel: audit: type=1130 audit(1769189466.846:918): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.34:22-10.200.16.10:57958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:07.291000 audit[6185]: USER_ACCT pid=6185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:07.310516 sshd[6185]: Accepted publickey for core from 10.200.16.10 port 57958 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:31:07.311779 sshd-session[6185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:07.310000 audit[6185]: CRED_ACQ pid=6185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:07.330681 kernel: audit: type=1101 audit(1769189467.291:919): pid=6185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:07.330812 kernel: audit: type=1103 audit(1769189467.310:920): pid=6185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:07.342173 kernel: audit: type=1006 audit(1769189467.310:921): pid=6185 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 23 17:31:07.310000 audit[6185]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe91981e0 a2=3 a3=0 items=0 ppid=1 pid=6185 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:07.347578 systemd-logind[2081]: New session 26 of user core. Jan 23 17:31:07.371308 kubelet[3639]: E0123 17:31:07.369034 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-75d7f978dc-h5tcw" podUID="9607b09f-7ec4-4ed2-9e57-38044aa1d0d6" Jan 23 17:31:07.371929 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 23 17:31:07.310000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:31:07.381047 kernel: audit: type=1300 audit(1769189467.310:921): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe91981e0 a2=3 a3=0 items=0 ppid=1 pid=6185 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:07.381149 kernel: audit: type=1327 audit(1769189467.310:921): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:31:07.384000 audit[6185]: USER_START pid=6185 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:07.411470 kernel: audit: type=1105 audit(1769189467.384:922): pid=6185 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:07.412000 audit[6189]: CRED_ACQ pid=6189 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:07.441297 kernel: audit: type=1103 audit(1769189467.412:923): pid=6189 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:07.625312 sshd[6189]: 
Connection closed by 10.200.16.10 port 57958 Jan 23 17:31:07.624712 sshd-session[6185]: pam_unix(sshd:session): session closed for user core Jan 23 17:31:07.627000 audit[6185]: USER_END pid=6185 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:07.648939 systemd[1]: sshd@22-10.200.20.34:22-10.200.16.10:57958.service: Deactivated successfully. Jan 23 17:31:07.650909 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 17:31:07.627000 audit[6185]: CRED_DISP pid=6185 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:07.668856 kernel: audit: type=1106 audit(1769189467.627:924): pid=6185 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:07.668991 kernel: audit: type=1104 audit(1769189467.627:925): pid=6185 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:07.669189 systemd-logind[2081]: Session 26 logged out. Waiting for processes to exit. 
Jan 23 17:31:07.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.34:22-10.200.16.10:57958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:07.672309 systemd-logind[2081]: Removed session 26.
Jan 23 17:31:10.360688 kubelet[3639]: E0123 17:31:10.360621 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7665dd49cb-d8kt6" podUID="6c109b4b-3504-4b89-94ff-4a8e2ba3506a"
Jan 23 17:31:12.724202 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 23 17:31:12.724377 kernel: audit: type=1130 audit(1769189472.711:927): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.34:22-10.200.16.10:59100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:12.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.34:22-10.200.16.10:59100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:12.712582 systemd[1]: Started sshd@23-10.200.20.34:22-10.200.16.10:59100.service - OpenSSH per-connection server daemon (10.200.16.10:59100).
Jan 23 17:31:13.153000 audit[6223]: USER_ACCT pid=6223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 17:31:13.156821 sshd[6223]: Accepted publickey for core from 10.200.16.10 port 59100 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA
Jan 23 17:31:13.172114 sshd-session[6223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:31:13.169000 audit[6223]: CRED_ACQ pid=6223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 17:31:13.187209 kernel: audit: type=1101 audit(1769189473.153:928): pid=6223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 17:31:13.187352 kernel: audit: type=1103 audit(1769189473.169:929): pid=6223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 17:31:13.197612 kernel: audit: type=1006 audit(1769189473.169:930): pid=6223 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Jan 23 17:31:13.198447 kernel: audit: type=1300 audit(1769189473.169:930): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff5dabb80 a2=3 a3=0 items=0 ppid=1 pid=6223 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 17:31:13.169000 audit[6223]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff5dabb80 a2=3 a3=0 items=0 ppid=1 pid=6223 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 23 17:31:13.169000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 17:31:13.222177 kernel: audit: type=1327 audit(1769189473.169:930): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 23 17:31:13.226941 systemd-logind[2081]: New session 27 of user core.
Jan 23 17:31:13.229450 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 23 17:31:13.232000 audit[6223]: USER_START pid=6223 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 17:31:13.252000 audit[6227]: CRED_ACQ pid=6227 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 17:31:13.268031 kernel: audit: type=1105 audit(1769189473.232:931): pid=6223 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 17:31:13.268168 kernel: audit: type=1103 audit(1769189473.252:932): pid=6227 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 17:31:13.462835 sshd[6227]: Connection closed by 10.200.16.10 port 59100
Jan 23 17:31:13.466474 sshd-session[6223]: pam_unix(sshd:session): session closed for user core
Jan 23 17:31:13.466000 audit[6223]: USER_END pid=6223 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 17:31:13.470951 systemd[1]: sshd@23-10.200.20.34:22-10.200.16.10:59100.service: Deactivated successfully.
Jan 23 17:31:13.474249 systemd[1]: session-27.scope: Deactivated successfully.
Jan 23 17:31:13.478438 systemd-logind[2081]: Session 27 logged out. Waiting for processes to exit.
Jan 23 17:31:13.482252 systemd-logind[2081]: Removed session 27.
Jan 23 17:31:13.466000 audit[6223]: CRED_DISP pid=6223 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 17:31:13.509956 kernel: audit: type=1106 audit(1769189473.466:933): pid=6223 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 17:31:13.510075 kernel: audit: type=1104 audit(1769189473.466:934): pid=6223 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 23 17:31:13.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.34:22-10.200.16.10:59100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 23 17:31:14.359699 kubelet[3639]: E0123 17:31:14.359515 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wq8fz" podUID="a12545f0-91c5-4708-a845-2a7a18a8616c"
Jan 23 17:31:15.358982 kubelet[3639]: E0123 17:31:15.358923 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66f6568cfc-b7js4" podUID="d6234171-70b3-48b5-98d5-2c3cd8e41f24"
Jan 23 17:31:15.359340 kubelet[3639]: E0123 17:31:15.359200 3639 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dtvct" podUID="25c5832d-778b-4f5d-974d-1be8e7376fdb"