Jan 23 17:30:43.674141 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Jan 23 17:30:43.674160 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Jan 23 15:38:20 -00 2026
Jan 23 17:30:43.674167 kernel: KASLR enabled
Jan 23 17:30:43.674171 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 23 17:30:43.674176 kernel: printk: legacy bootconsole [pl11] enabled
Jan 23 17:30:43.674180 kernel: efi: EFI v2.7 by EDK II
Jan 23 17:30:43.674186 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db7d598
Jan 23 17:30:43.674190 kernel: random: crng init done
Jan 23 17:30:43.674194 kernel: secureboot: Secure boot disabled
Jan 23 17:30:43.674198 kernel: ACPI: Early table checksum verification disabled
Jan 23 17:30:43.674202 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Jan 23 17:30:43.674207 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 17:30:43.674211 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 17:30:43.674216 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 23 17:30:43.674222 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 17:30:43.674226 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 17:30:43.674230 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 17:30:43.674236 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 17:30:43.674240 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 17:30:43.674244 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 17:30:43.674249 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 23 17:30:43.674253 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 17:30:43.674258 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 23 17:30:43.674262 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 23 17:30:43.674266 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 23 17:30:43.674271 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Jan 23 17:30:43.674275 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Jan 23 17:30:43.674281 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 23 17:30:43.674285 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 23 17:30:43.674290 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 23 17:30:43.674294 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 23 17:30:43.674299 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 23 17:30:43.674303 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 23 17:30:43.674307 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 23 17:30:43.674312 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 23 17:30:43.674316 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 23 17:30:43.674321 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Jan 23 17:30:43.674325 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Jan 23 17:30:43.674330 kernel: Zone ranges:
Jan 23 17:30:43.674335 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 23 17:30:43.674341 kernel: DMA32 empty
Jan 23 17:30:43.674346 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 23 17:30:43.674351 kernel: Device empty
Jan 23 17:30:43.674356 kernel: Movable zone start for each node
Jan 23 17:30:43.674361 kernel: Early memory node ranges
Jan 23 17:30:43.674366 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 23 17:30:43.674371 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Jan 23 17:30:43.674375 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Jan 23 17:30:43.674380 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Jan 23 17:30:43.674384 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Jan 23 17:30:43.674389 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Jan 23 17:30:43.674394 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 23 17:30:43.674399 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 23 17:30:43.674404 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 23 17:30:43.674409 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Jan 23 17:30:43.674413 kernel: psci: probing for conduit method from ACPI.
Jan 23 17:30:43.674418 kernel: psci: PSCIv1.3 detected in firmware.
Jan 23 17:30:43.674423 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 17:30:43.674427 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 23 17:30:43.674432 kernel: psci: SMC Calling Convention v1.4 Jan 23 17:30:43.674437 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 23 17:30:43.674441 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 23 17:30:43.674446 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 23 17:30:43.674450 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 23 17:30:43.674456 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 23 17:30:43.674461 kernel: Detected PIPT I-cache on CPU0 Jan 23 17:30:43.674466 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Jan 23 17:30:43.674470 kernel: CPU features: detected: GIC system register CPU interface Jan 23 17:30:43.674475 kernel: CPU features: detected: Spectre-v4 Jan 23 17:30:43.674480 kernel: CPU features: detected: Spectre-BHB Jan 23 17:30:43.674484 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 23 17:30:43.674489 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 23 17:30:43.674494 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Jan 23 17:30:43.674498 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 23 17:30:43.674504 kernel: alternatives: applying boot alternatives Jan 23 17:30:43.674510 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=35f959b0e84cd72dec35dcaa9fdae098b059a7436b8ff34bc604c87ac6375079 Jan 23 17:30:43.674515 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 17:30:43.674519 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 17:30:43.674524 kernel: Fallback order for Node 0: 0 Jan 23 17:30:43.674529 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Jan 23 17:30:43.674533 kernel: Policy zone: Normal Jan 23 17:30:43.674538 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 17:30:43.674543 kernel: software IO TLB: area num 2. Jan 23 17:30:43.674547 kernel: software IO TLB: mapped [mem 0x0000000037370000-0x000000003b370000] (64MB) Jan 23 17:30:43.674552 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 17:30:43.674558 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 17:30:43.674563 kernel: rcu: RCU event tracing is enabled. Jan 23 17:30:43.674568 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 17:30:43.674573 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 17:30:43.674577 kernel: Tracing variant of Tasks RCU enabled. Jan 23 17:30:43.674582 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 17:30:43.674587 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 17:30:43.674592 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 17:30:43.674596 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 23 17:30:43.674601 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 23 17:30:43.674606 kernel: GICv3: 960 SPIs implemented Jan 23 17:30:43.674611 kernel: GICv3: 0 Extended SPIs implemented Jan 23 17:30:43.674616 kernel: Root IRQ handler: gic_handle_irq Jan 23 17:30:43.674621 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 23 17:30:43.674625 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Jan 23 17:30:43.674630 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 23 17:30:43.674635 kernel: ITS: No ITS available, not enabling LPIs Jan 23 17:30:43.674639 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 17:30:43.674644 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Jan 23 17:30:43.674649 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 23 17:30:43.674654 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Jan 23 17:30:43.674659 kernel: Console: colour dummy device 80x25 Jan 23 17:30:43.674665 kernel: printk: legacy console [tty1] enabled Jan 23 17:30:43.674670 kernel: ACPI: Core revision 20240827 Jan 23 17:30:43.674675 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Jan 23 17:30:43.674680 kernel: pid_max: default: 32768 minimum: 301 Jan 23 17:30:43.674685 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 17:30:43.674690 kernel: landlock: Up and running. Jan 23 17:30:43.674695 kernel: SELinux: Initializing. Jan 23 17:30:43.674701 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 17:30:43.674706 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 17:30:43.674711 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1 Jan 23 17:30:43.674716 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0 Jan 23 17:30:43.674724 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 23 17:30:43.674731 kernel: rcu: Hierarchical SRCU implementation. Jan 23 17:30:43.674736 kernel: rcu: Max phase no-delay instances is 400. Jan 23 17:30:43.674741 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 17:30:43.674746 kernel: Remapping and enabling EFI services. Jan 23 17:30:43.674752 kernel: smp: Bringing up secondary CPUs ... Jan 23 17:30:43.674757 kernel: Detected PIPT I-cache on CPU1 Jan 23 17:30:43.674763 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 23 17:30:43.674768 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Jan 23 17:30:43.674774 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 17:30:43.674779 kernel: SMP: Total of 2 processors activated. 
Jan 23 17:30:43.674784 kernel: CPU: All CPU(s) started at EL1 Jan 23 17:30:43.674789 kernel: CPU features: detected: 32-bit EL0 Support Jan 23 17:30:43.674795 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 23 17:30:43.674800 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 23 17:30:43.674805 kernel: CPU features: detected: Common not Private translations Jan 23 17:30:43.674811 kernel: CPU features: detected: CRC32 instructions Jan 23 17:30:43.674816 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Jan 23 17:30:43.674822 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 23 17:30:43.674827 kernel: CPU features: detected: LSE atomic instructions Jan 23 17:30:43.674832 kernel: CPU features: detected: Privileged Access Never Jan 23 17:30:43.674837 kernel: CPU features: detected: Speculation barrier (SB) Jan 23 17:30:43.674853 kernel: CPU features: detected: TLB range maintenance instructions Jan 23 17:30:43.674859 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 23 17:30:43.674864 kernel: CPU features: detected: Scalable Vector Extension Jan 23 17:30:43.674870 kernel: alternatives: applying system-wide alternatives Jan 23 17:30:43.674875 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jan 23 17:30:43.674880 kernel: SVE: maximum available vector length 16 bytes per vector Jan 23 17:30:43.674885 kernel: SVE: default vector length 16 bytes per vector Jan 23 17:30:43.674891 kernel: Memory: 3979900K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 12480K init, 1038K bss, 193072K reserved, 16384K cma-reserved) Jan 23 17:30:43.674897 kernel: devtmpfs: initialized Jan 23 17:30:43.674902 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 17:30:43.674907 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 17:30:43.674913 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 23 17:30:43.674918 kernel: 0 pages in range for non-PLT usage Jan 23 17:30:43.674923 kernel: 515168 pages in range for PLT usage Jan 23 17:30:43.674928 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 17:30:43.674935 kernel: SMBIOS 3.1.0 present. Jan 23 17:30:43.674940 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Jan 23 17:30:43.674945 kernel: DMI: Memory slots populated: 2/2 Jan 23 17:30:43.674950 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 17:30:43.674956 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 23 17:30:43.674961 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 23 17:30:43.674966 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 23 17:30:43.674972 kernel: audit: initializing netlink subsys (disabled) Jan 23 17:30:43.674978 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Jan 23 17:30:43.674983 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 17:30:43.674988 kernel: cpuidle: using governor menu Jan 23 17:30:43.674994 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 23 17:30:43.674999 kernel: ASID allocator initialised with 32768 entries
Jan 23 17:30:43.675004 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 17:30:43.675009 kernel: Serial: AMBA PL011 UART driver
Jan 23 17:30:43.675016 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 17:30:43.675021 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 17:30:43.675026 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 17:30:43.675031 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 17:30:43.675036 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 17:30:43.675041 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 17:30:43.675047 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 17:30:43.675053 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 17:30:43.675058 kernel: ACPI: Added _OSI(Module Device)
Jan 23 17:30:43.675063 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 17:30:43.675068 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 17:30:43.675073 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 17:30:43.675079 kernel: ACPI: Interpreter enabled
Jan 23 17:30:43.675084 kernel: ACPI: Using GIC for interrupt routing
Jan 23 17:30:43.675090 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 23 17:30:43.675095 kernel: printk: legacy console [ttyAMA0] enabled
Jan 23 17:30:43.675100 kernel: printk: legacy bootconsole [pl11] disabled
Jan 23 17:30:43.675105 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 23 17:30:43.675111 kernel: ACPI: CPU0 has been hot-added
Jan 23 17:30:43.675116 kernel: ACPI: CPU1 has been hot-added
Jan 23 17:30:43.675121 kernel: iommu: Default domain type: Translated
Jan 23 17:30:43.675127 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 17:30:43.675132 kernel: efivars: Registered efivars operations
Jan 23 17:30:43.675137 kernel: vgaarb: loaded
Jan 23 17:30:43.675142 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 17:30:43.675148 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 17:30:43.675153 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 17:30:43.675158 kernel: pnp: PnP ACPI init
Jan 23 17:30:43.675164 kernel: pnp: PnP ACPI: found 0 devices
Jan 23 17:30:43.675169 kernel: NET: Registered PF_INET protocol family
Jan 23 17:30:43.675174 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 17:30:43.675179 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 17:30:43.675185 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 17:30:43.675190 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 17:30:43.675195 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 17:30:43.675201 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 17:30:43.675206 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 17:30:43.675212 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 17:30:43.675217 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 17:30:43.675222 kernel: PCI: CLS 0 bytes, default 64
Jan 23 17:30:43.675227 kernel: kvm [1]: HYP mode not available
Jan 23 17:30:43.675232 kernel: Initialise system trusted keyrings
Jan 23 17:30:43.675237 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 17:30:43.675243 kernel: Key type asymmetric registered
Jan 23 17:30:43.675248 kernel: Asymmetric key parser 'x509' registered
Jan 23 17:30:43.675254 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jan 23 17:30:43.675259 kernel: io scheduler mq-deadline registered
Jan 23 17:30:43.675264 kernel: io scheduler kyber registered
Jan 23 17:30:43.675269 kernel: io scheduler bfq registered
Jan 23 17:30:43.675275 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 17:30:43.675280 kernel: thunder_xcv, ver 1.0
Jan 23 17:30:43.675285 kernel: thunder_bgx, ver 1.0
Jan 23 17:30:43.675291 kernel: nicpf, ver 1.0
Jan 23 17:30:43.675296 kernel: nicvf, ver 1.0
Jan 23 17:30:43.675434 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 17:30:43.675501 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T17:30:39 UTC (1769189439)
Jan 23 17:30:43.675510 kernel: efifb: probing for efifb
Jan 23 17:30:43.675515 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 23 17:30:43.675521 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 23 17:30:43.675526 kernel: efifb: scrolling: redraw
Jan 23 17:30:43.675531 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 17:30:43.675536 kernel: Console: switching to colour frame buffer device 128x48
Jan 23 17:30:43.675542 kernel: fb0: EFI VGA frame buffer device
Jan 23 17:30:43.675548 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 23 17:30:43.675553 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 17:30:43.675558 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jan 23 17:30:43.675564 kernel: watchdog: NMI not fully supported
Jan 23 17:30:43.675569 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 17:30:43.675574 kernel: NET: Registered PF_INET6 protocol family
Jan 23 17:30:43.675579 kernel: Segment Routing with IPv6
Jan 23 17:30:43.675585 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 17:30:43.675590 kernel: NET: Registered PF_PACKET protocol family
Jan 23 17:30:43.675595 kernel: Key type dns_resolver registered
Jan 23 17:30:43.675600 kernel: registered taskstats version 1
Jan 23 17:30:43.675606 kernel: Loading compiled-in X.509 certificates
Jan 23 17:30:43.675611 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2bef814d3854848add18d21bd2681c3d03c60f56'
Jan 23 17:30:43.675616 kernel: Demotion targets for Node 0: null
Jan 23 17:30:43.675622 kernel: Key type .fscrypt registered
Jan 23 17:30:43.675627 kernel: Key type fscrypt-provisioning registered
Jan 23 17:30:43.675632 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 17:30:43.675637 kernel: ima: Allocated hash algorithm: sha1 Jan 23 17:30:43.675642 kernel: ima: No architecture policies found Jan 23 17:30:43.675648 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 23 17:30:43.675653 kernel: clk: Disabling unused clocks Jan 23 17:30:43.675658 kernel: PM: genpd: Disabling unused power domains Jan 23 17:30:43.675664 kernel: Freeing unused kernel memory: 12480K Jan 23 17:30:43.675669 kernel: Run /init as init process Jan 23 17:30:43.675674 kernel: with arguments: Jan 23 17:30:43.675679 kernel: /init Jan 23 17:30:43.675684 kernel: with environment: Jan 23 17:30:43.675689 kernel: HOME=/ Jan 23 17:30:43.675694 kernel: TERM=linux Jan 23 17:30:43.675700 kernel: hv_vmbus: Vmbus version:5.3 Jan 23 17:30:43.675705 kernel: hv_vmbus: registering driver hid_hyperv Jan 23 17:30:43.675711 kernel: SCSI subsystem initialized Jan 23 17:30:43.675716 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 23 17:30:43.675802 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 23 17:30:43.675809 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 23 17:30:43.675816 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 23 17:30:43.675821 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 23 17:30:43.675827 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 23 17:30:43.675832 kernel: PTP clock support registered Jan 23 17:30:43.675837 kernel: hv_utils: Registering HyperV Utility Driver Jan 23 17:30:43.675852 kernel: hv_vmbus: registering driver hv_utils Jan 23 17:30:43.675857 kernel: hv_utils: Heartbeat IC version 3.0 Jan 23 17:30:43.675864 kernel: hv_utils: Shutdown IC version 3.2 Jan 23 17:30:43.675869 kernel: hv_utils: TimeSync IC version 4.0 Jan 23 17:30:43.675874 kernel: hv_vmbus: registering driver hv_storvsc Jan 23 17:30:43.676018 kernel: scsi host0: storvsc_host_t Jan 23 17:30:43.676101 kernel: scsi host1: storvsc_host_t Jan 23 17:30:43.676188 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 23 17:30:43.676272 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 23 17:30:43.676347 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 23 17:30:43.676420 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 23 17:30:43.676500 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 23 17:30:43.676574 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 23 17:30:43.676647 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 23 17:30:43.676728 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#257 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 23 17:30:43.676796 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#264 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 23 17:30:43.676803 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 17:30:43.676884 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 23 17:30:43.676959 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 23 17:30:43.676967 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 17:30:43.677039 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 23 17:30:43.677046 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 23 17:30:43.677051 kernel: device-mapper: uevent: version 1.0.3 Jan 23 17:30:43.677057 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 17:30:43.677062 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 23 17:30:43.677067 kernel: raid6: neonx8 gen() 18562 MB/s Jan 23 17:30:43.677074 kernel: raid6: neonx4 gen() 18584 MB/s Jan 23 17:30:43.677079 kernel: raid6: neonx2 gen() 17102 MB/s Jan 23 17:30:43.677084 kernel: raid6: neonx1 gen() 15062 MB/s Jan 23 17:30:43.677089 kernel: raid6: int64x8 gen() 10527 MB/s Jan 23 17:30:43.677094 kernel: raid6: int64x4 gen() 10606 MB/s Jan 23 17:30:43.677099 kernel: raid6: int64x2 gen() 8989 MB/s Jan 23 17:30:43.677104 kernel: raid6: int64x1 gen() 6990 MB/s Jan 23 17:30:43.677110 kernel: raid6: using algorithm neonx4 gen() 18584 MB/s Jan 23 17:30:43.677116 kernel: raid6: .... xor() 15131 MB/s, rmw enabled Jan 23 17:30:43.677121 kernel: raid6: using neon recovery algorithm Jan 23 17:30:43.677126 kernel: xor: measuring software checksum speed Jan 23 17:30:43.677131 kernel: 8regs : 28633 MB/sec Jan 23 17:30:43.677136 kernel: 32regs : 28741 MB/sec Jan 23 17:30:43.677141 kernel: arm64_neon : 36607 MB/sec Jan 23 17:30:43.677147 kernel: xor: using function: arm64_neon (36607 MB/sec) Jan 23 17:30:43.677153 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 17:30:43.677158 kernel: BTRFS: device fsid 8d2a73a7-ed2a-4757-891b-9df844aa914e devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (380) Jan 23 17:30:43.677164 kernel: BTRFS info (device dm-0): first mount of filesystem 8d2a73a7-ed2a-4757-891b-9df844aa914e Jan 23 17:30:43.677169 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:30:43.677174 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 17:30:43.677180 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 17:30:43.677185 kernel: loop: module loaded Jan 23 17:30:43.677191 kernel: loop0: detected capacity change from 0 to 91840 Jan 23 17:30:43.677196 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 17:30:43.677203 systemd[1]: Successfully made /usr/ read-only. Jan 23 17:30:43.677210 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 17:30:43.677216 systemd[1]: Detected virtualization microsoft. Jan 23 17:30:43.677222 systemd[1]: Detected architecture arm64. Jan 23 17:30:43.677228 systemd[1]: Running in initrd. Jan 23 17:30:43.677234 systemd[1]: No hostname configured, using default hostname. Jan 23 17:30:43.677239 systemd[1]: Hostname set to . Jan 23 17:30:43.677245 systemd[1]: Initializing machine ID from random generator. Jan 23 17:30:43.677251 systemd[1]: Queued start job for default target initrd.target. Jan 23 17:30:43.677256 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 17:30:43.677263 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:30:43.677268 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 23 17:30:43.677275 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 17:30:43.677280 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 17:30:43.677286 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 17:30:43.677292 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 17:30:43.677299 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:30:43.677305 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:30:43.677311 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 17:30:43.677316 systemd[1]: Reached target paths.target - Path Units. Jan 23 17:30:43.677322 systemd[1]: Reached target slices.target - Slice Units. Jan 23 17:30:43.677328 systemd[1]: Reached target swap.target - Swaps. Jan 23 17:30:43.677333 systemd[1]: Reached target timers.target - Timer Units. Jan 23 17:30:43.677340 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 17:30:43.677346 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 17:30:43.677352 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 23 17:30:43.677357 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 17:30:43.677363 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 17:30:43.677369 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:30:43.677381 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 17:30:43.677387 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:30:43.677393 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 17:30:43.677399 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 17:30:43.677405 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 17:30:43.677412 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 17:30:43.677418 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 17:30:43.677424 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 17:30:43.677430 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 17:30:43.677436 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 17:30:43.677442 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 17:30:43.677449 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:30:43.677469 systemd-journald[517]: Collecting audit messages is enabled. Jan 23 17:30:43.677486 systemd-journald[517]: Journal started Jan 23 17:30:43.677500 systemd-journald[517]: Runtime Journal (/run/log/journal/2dd90dfa946f49ac8651a36b7666e3ec) is 8M, max 78.3M, 70.3M free. Jan 23 17:30:43.695235 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 23 17:30:43.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:43.707748 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 17:30:43.740964 kernel: audit: type=1130 audit(1769189443.694:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:43.740999 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 17:30:43.741007 kernel: audit: type=1130 audit(1769189443.717:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:43.741015 kernel: Bridge firewalling registered Jan 23 17:30:43.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:43.718573 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:30:43.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:43.739813 systemd-modules-load[520]: Inserted module 'br_netfilter' Jan 23 17:30:43.779525 kernel: audit: type=1130 audit(1769189443.744:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:43.779555 kernel: audit: type=1130 audit(1769189443.762:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:43.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:43.745198 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 17:30:43.799514 kernel: audit: type=1130 audit(1769189443.784:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:43.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:43.763238 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 17:30:43.787709 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 17:30:43.810770 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 17:30:43.827917 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 17:30:43.841910 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 23 17:30:43.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:43.859078 systemd-tmpfiles[532]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 17:30:43.869011 kernel: audit: type=1130 audit(1769189443.849:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:43.868749 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:30:43.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:43.887816 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 17:30:43.894220 kernel: audit: type=1130 audit(1769189443.872:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:43.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:43.913863 kernel: audit: type=1130 audit(1769189443.898:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:43.911696 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:30:43.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:43.918971 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 17:30:43.943701 kernel: audit: type=1130 audit(1769189443.915:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:43.951000 audit: BPF prog-id=6 op=LOAD Jan 23 17:30:43.952826 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 17:30:43.962199 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 17:30:43.989602 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 17:30:43.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:44.000423 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:30:44.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:44.027004 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 23 17:30:44.066050 systemd-resolved[540]: Positive Trust Anchors: Jan 23 17:30:44.066067 systemd-resolved[540]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 17:30:44.066070 systemd-resolved[540]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 17:30:44.066089 systemd-resolved[540]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 17:30:44.081907 systemd-resolved[540]: Defaulting to hostname 'linux'. Jan 23 17:30:44.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:44.082748 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 17:30:44.115673 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:30:44.133899 dracut-cmdline[555]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=35f959b0e84cd72dec35dcaa9fdae098b059a7436b8ff34bc604c87ac6375079 Jan 23 17:30:44.272870 kernel: Loading iSCSI transport class v2.0-870. Jan 23 17:30:44.313869 kernel: iscsi: registered transport (tcp) Jan 23 17:30:44.343542 kernel: iscsi: registered transport (qla4xxx) Jan 23 17:30:44.343557 kernel: QLogic iSCSI HBA Driver Jan 23 17:30:44.390123 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 17:30:44.411449 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:30:44.426254 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 23 17:30:44.426276 kernel: audit: type=1130 audit(1769189444.416:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:44.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:44.441432 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 17:30:44.484589 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 17:30:44.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:44.490315 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 23 17:30:44.517088 kernel: audit: type=1130 audit(1769189444.488:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:44.512434 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 17:30:44.544369 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 17:30:44.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:44.563000 audit: BPF prog-id=7 op=LOAD Jan 23 17:30:44.568193 kernel: audit: type=1130 audit(1769189444.547:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:44.568238 kernel: audit: type=1334 audit(1769189444.563:18): prog-id=7 op=LOAD Jan 23 17:30:44.568930 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:30:44.582139 kernel: audit: type=1334 audit(1769189444.563:19): prog-id=8 op=LOAD Jan 23 17:30:44.563000 audit: BPF prog-id=8 op=LOAD Jan 23 17:30:44.667559 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 17:30:44.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:44.689880 kernel: audit: type=1130 audit(1769189444.671:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:44.706673 systemd-udevd[790]: Using default interface naming scheme 'v257'. Jan 23 17:30:44.716269 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:30:44.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:44.728002 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 17:30:44.749682 kernel: audit: type=1130 audit(1769189444.723:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:44.759991 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 17:30:44.758000 audit: BPF prog-id=9 op=LOAD Jan 23 17:30:44.772596 kernel: audit: type=1334 audit(1769189444.758:22): prog-id=9 op=LOAD Jan 23 17:30:44.777398 dracut-pre-trigger[900]: rd.md=0: removing MD RAID activation Jan 23 17:30:44.807887 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 17:30:44.815177 systemd-networkd[901]: lo: Link UP Jan 23 17:30:44.845741 kernel: audit: type=1130 audit(1769189444.816:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:30:44.845772 kernel: audit: type=1130 audit(1769189444.830:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:44.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:44.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:44.815180 systemd-networkd[901]: lo: Gained carrier Jan 23 17:30:44.817056 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 17:30:44.832650 systemd[1]: Reached target network.target - Network. Jan 23 17:30:44.852030 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 17:30:44.915661 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:30:44.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:44.928439 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 17:30:45.001873 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#266 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 17:30:45.022512 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:30:45.026008 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:30:45.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:45.031396 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:30:45.055250 kernel: hv_vmbus: registering driver hv_netvsc Jan 23 17:30:45.037977 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:30:45.070235 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:30:45.070345 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:30:45.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:45.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:45.084938 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:30:45.108992 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:30:45.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:30:45.125860 kernel: hv_netvsc 0022487a-0cdf-0022-487a-0cdf0022487a eth0: VF slot 1 added Jan 23 17:30:45.136487 systemd-networkd[901]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 17:30:45.145208 systemd-networkd[901]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 17:30:45.146132 systemd-networkd[901]: eth0: Link UP Jan 23 17:30:45.146272 systemd-networkd[901]: eth0: Gained carrier Jan 23 17:30:45.164678 kernel: hv_vmbus: registering driver hv_pci Jan 23 17:30:45.164702 kernel: hv_pci e4c1fcfe-a2c0-4f30-94a1-3e2b5d247c1b: PCI VMBus probing: Using version 0x10004 Jan 23 17:30:45.146286 systemd-networkd[901]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 17:30:45.183280 kernel: hv_pci e4c1fcfe-a2c0-4f30-94a1-3e2b5d247c1b: PCI host bridge to bus a2c0:00 Jan 23 17:30:45.183476 kernel: pci_bus a2c0:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 23 17:30:45.183588 kernel: pci_bus a2c0:00: No busn resource found for root bus, will use [bus 00-ff] Jan 23 17:30:45.193161 kernel: pci a2c0:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Jan 23 17:30:45.194896 systemd-networkd[901]: eth0: DHCPv4 address 10.200.20.22/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 17:30:45.207109 kernel: pci a2c0:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 23 17:30:45.207202 kernel: pci a2c0:00:02.0: enabling Extended Tags Jan 23 17:30:45.221872 kernel: pci a2c0:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a2c0:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Jan 23 17:30:45.230660 kernel: pci_bus a2c0:00: busn_res: [bus 00-ff] end is updated to 00 Jan 23 17:30:45.230824 kernel: pci a2c0:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Jan 23 17:30:45.411439 kernel: mlx5_core a2c0:00:02.0: enabling device (0000 -> 0002) Jan 23 17:30:45.419443 kernel: mlx5_core a2c0:00:02.0: PTM is not supported by PCIe Jan 23 17:30:45.419705 kernel: mlx5_core a2c0:00:02.0: firmware version: 16.30.5026 Jan 23 17:30:45.489786 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 23 17:30:45.513452 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 23 17:30:45.545637 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 23 17:30:45.600266 kernel: hv_netvsc 0022487a-0cdf-0022-487a-0cdf0022487a eth0: VF registering: eth1 Jan 23 17:30:45.600509 kernel: mlx5_core a2c0:00:02.0 eth1: joined to eth0 Jan 23 17:30:45.606917 kernel: mlx5_core a2c0:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 23 17:30:45.613974 kernel: mlx5_core a2c0:00:02.0 enP41664s1: renamed from eth1 Jan 23 17:30:45.613951 systemd-networkd[901]: eth1: Interface name change detected, renamed to enP41664s1. Jan 23 17:30:45.747862 kernel: mlx5_core a2c0:00:02.0 enP41664s1: Link up Jan 23 17:30:45.775619 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 23 17:30:45.788950 kernel: hv_netvsc 0022487a-0cdf-0022-487a-0cdf0022487a eth0: Data path switched to VF: enP41664s1 Jan 23 17:30:45.791357 systemd-networkd[901]: enP41664s1: Link UP Jan 23 17:30:45.791839 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Jan 23 17:30:45.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:45.799370 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 17:30:45.807655 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:30:45.817040 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 17:30:45.826702 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 17:30:45.849781 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 17:30:45.928714 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 17:30:45.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:46.153981 systemd-networkd[901]: enP41664s1: Gained carrier Jan 23 17:30:47.001238 systemd-networkd[901]: eth0: Gained IPv6LL Jan 23 17:30:47.148864 disk-uuid[1061]: Warning: The kernel is still using the old partition table. Jan 23 17:30:47.148864 disk-uuid[1061]: The new table will be used at the next reboot or after you Jan 23 17:30:47.148864 disk-uuid[1061]: run partprobe(8) or kpartx(8) Jan 23 17:30:47.148864 disk-uuid[1061]: The operation has completed successfully. Jan 23 17:30:47.161322 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 17:30:47.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:47.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:47.164916 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 17:30:47.173003 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 17:30:47.229030 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1176) Jan 23 17:30:47.238696 kernel: BTRFS info (device sda6): first mount of filesystem 604c215e-feca-417a-a119-9b36e3a162e8 Jan 23 17:30:47.238737 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:30:47.262547 kernel: BTRFS info (device sda6): turning on async discard Jan 23 17:30:47.262574 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 17:30:47.271868 kernel: BTRFS info (device sda6): last unmount of filesystem 604c215e-feca-417a-a119-9b36e3a162e8 Jan 23 17:30:47.272612 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 17:30:47.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:47.282630 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 23 17:30:48.853692 ignition[1195]: Ignition 2.24.0 Jan 23 17:30:48.853703 ignition[1195]: Stage: fetch-offline Jan 23 17:30:48.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:48.857710 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 17:30:48.853859 ignition[1195]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:30:48.865818 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 23 17:30:48.853869 ignition[1195]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 17:30:48.854145 ignition[1195]: parsed url from cmdline: "" Jan 23 17:30:48.854156 ignition[1195]: no config URL provided Jan 23 17:30:48.854209 ignition[1195]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 17:30:48.854222 ignition[1195]: no config at "/usr/lib/ignition/user.ign" Jan 23 17:30:48.854226 ignition[1195]: failed to fetch config: resource requires networking Jan 23 17:30:48.854427 ignition[1195]: Ignition finished successfully Jan 23 17:30:48.900109 ignition[1201]: Ignition 2.24.0 Jan 23 17:30:48.900114 ignition[1201]: Stage: fetch Jan 23 17:30:48.900352 ignition[1201]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:30:48.900359 ignition[1201]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 17:30:48.900438 ignition[1201]: parsed url from cmdline: "" Jan 23 17:30:48.900441 ignition[1201]: no config URL provided Jan 23 17:30:48.900444 ignition[1201]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 17:30:48.900449 ignition[1201]: no config at "/usr/lib/ignition/user.ign" Jan 23 17:30:48.900469 ignition[1201]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 23 17:30:48.973705 ignition[1201]: GET result: OK Jan 23 17:30:48.973775 ignition[1201]: config has been read from IMDS userdata Jan 23 17:30:48.973788 ignition[1201]: parsing config with SHA512: 097b8d6b0100935529f75dfd783008e2cc4bc629e26c95324b966f70d5bae5c38d766dd6f48790b34f557a07ea1708f6050b9198a9140d2ee2c8ee856b001fa0 Jan 23 17:30:48.980979 unknown[1201]: fetched base config from "system" Jan 23 17:30:48.980988 unknown[1201]: fetched base config from "system" Jan 23 17:30:48.981238 ignition[1201]: fetch: fetch complete Jan 23 17:30:48.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:48.980992 unknown[1201]: fetched user config from "azure" Jan 23 17:30:48.981241 ignition[1201]: fetch: fetch passed Jan 23 17:30:48.985606 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 17:30:48.981276 ignition[1201]: Ignition finished successfully Jan 23 17:30:48.993905 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 17:30:49.025336 ignition[1207]: Ignition 2.24.0 Jan 23 17:30:49.025350 ignition[1207]: Stage: kargs Jan 23 17:30:49.030221 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 17:30:49.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:30:49.025577 ignition[1207]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:30:49.036437 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 17:30:49.025588 ignition[1207]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 17:30:49.026257 ignition[1207]: kargs: kargs passed Jan 23 17:30:49.026302 ignition[1207]: Ignition finished successfully Jan 23 17:30:49.062868 ignition[1213]: Ignition 2.24.0 Jan 23 17:30:49.062880 ignition[1213]: Stage: disks Jan 23 17:30:49.066873 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 17:30:49.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:49.063091 ignition[1213]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:30:49.072721 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 17:30:49.063098 ignition[1213]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 17:30:49.080421 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 17:30:49.063772 ignition[1213]: disks: disks passed Jan 23 17:30:49.089007 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 17:30:49.063816 ignition[1213]: Ignition finished successfully Jan 23 17:30:49.097443 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 17:30:49.105959 systemd[1]: Reached target basic.target - Basic System. Jan 23 17:30:49.115186 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 17:30:49.231427 systemd-fsck[1221]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Jan 23 17:30:49.239771 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 17:30:49.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:49.247978 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 17:30:49.536869 kernel: EXT4-fs (sda9): mounted filesystem 6e8555bb-6998-46ec-8ba6-5a7a415f09ac r/w with ordered data mode. Quota mode: none. Jan 23 17:30:49.537261 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 17:30:49.540995 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 17:30:49.574990 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 17:30:49.579588 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 17:30:49.594914 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 23 17:30:49.599961 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 17:30:49.600003 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 17:30:49.622072 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 17:30:49.632014 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 23 17:30:49.659037 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1235) Jan 23 17:30:49.659086 kernel: BTRFS info (device sda6): first mount of filesystem 604c215e-feca-417a-a119-9b36e3a162e8 Jan 23 17:30:49.664474 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:30:49.673731 kernel: BTRFS info (device sda6): turning on async discard Jan 23 17:30:49.673789 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 17:30:49.676064 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 17:30:50.223494 coreos-metadata[1237]: Jan 23 17:30:50.223 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 17:30:50.230023 coreos-metadata[1237]: Jan 23 17:30:50.229 INFO Fetch successful Jan 23 17:30:50.230023 coreos-metadata[1237]: Jan 23 17:30:50.230 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 23 17:30:50.242564 coreos-metadata[1237]: Jan 23 17:30:50.242 INFO Fetch successful Jan 23 17:30:50.256902 coreos-metadata[1237]: Jan 23 17:30:50.256 INFO wrote hostname ci-4547.1.0-a-71c1b0067a to /sysroot/etc/hostname Jan 23 17:30:50.264928 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 17:30:50.278612 kernel: kauditd_printk_skb: 15 callbacks suppressed Jan 23 17:30:50.278635 kernel: audit: type=1130 audit(1769189450.270:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:50.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:51.849506 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 17:30:51.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:51.861995 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 17:30:51.879123 kernel: audit: type=1130 audit(1769189451.857:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:51.885675 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 17:30:51.898611 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 17:30:51.908198 kernel: BTRFS info (device sda6): last unmount of filesystem 604c215e-feca-417a-a119-9b36e3a162e8 Jan 23 17:30:51.927901 ignition[1338]: INFO : Ignition 2.24.0 Jan 23 17:30:51.932147 ignition[1338]: INFO : Stage: mount Jan 23 17:30:51.932147 ignition[1338]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:30:51.932147 ignition[1338]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 17:30:51.977740 kernel: audit: type=1130 audit(1769189451.940:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:30:51.977767 kernel: audit: type=1130 audit(1769189451.960:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:51.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:51.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:51.977827 ignition[1338]: INFO : mount: mount passed Jan 23 17:30:51.977827 ignition[1338]: INFO : Ignition finished successfully Jan 23 17:30:51.935954 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 17:30:51.941809 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 17:30:51.963834 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 17:30:52.001086 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 17:30:52.044865 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1349) Jan 23 17:30:52.044935 kernel: BTRFS info (device sda6): first mount of filesystem 604c215e-feca-417a-a119-9b36e3a162e8 Jan 23 17:30:52.049239 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:30:52.059334 kernel: BTRFS info (device sda6): turning on async discard Jan 23 17:30:52.059374 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 17:30:52.060975 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 17:30:52.088055 ignition[1366]: INFO : Ignition 2.24.0 Jan 23 17:30:52.088055 ignition[1366]: INFO : Stage: files Jan 23 17:30:52.094804 ignition[1366]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:30:52.094804 ignition[1366]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 17:30:52.094804 ignition[1366]: DEBUG : files: compiled without relabeling support, skipping Jan 23 17:30:52.117146 ignition[1366]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 17:30:52.117146 ignition[1366]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 17:30:52.178826 ignition[1366]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 17:30:52.185462 ignition[1366]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 17:30:52.185462 ignition[1366]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 17:30:52.185234 unknown[1366]: wrote ssh authorized keys file for user: core Jan 23 17:30:52.211156 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 23 17:30:52.219596 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 23 17:30:52.248159 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 17:30:52.447908 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 23 17:30:52.447908 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing 
file "/sysroot/home/core/install.sh" Jan 23 17:30:52.463480 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 17:30:52.463480 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 17:30:52.463480 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 17:30:52.463480 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 17:30:52.463480 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 17:30:52.463480 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 17:30:52.463480 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 17:30:52.463480 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 17:30:52.463480 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 17:30:52.463480 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 23 17:30:52.463480 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 23 17:30:52.463480 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 23 17:30:52.463480 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Jan 23 17:30:53.110670 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 17:30:54.726150 ignition[1366]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 23 17:30:54.726150 ignition[1366]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 17:30:54.888150 ignition[1366]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 17:30:54.904736 ignition[1366]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 17:30:54.904736 ignition[1366]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 17:30:54.918727 ignition[1366]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 23 17:30:54.918727 ignition[1366]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 17:30:54.918727 ignition[1366]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 17:30:54.918727 ignition[1366]: INFO : files: 
createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 17:30:54.918727 ignition[1366]: INFO : files: files passed Jan 23 17:30:54.918727 ignition[1366]: INFO : Ignition finished successfully Jan 23 17:30:54.982413 kernel: audit: type=1130 audit(1769189454.923:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.915011 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 17:30:54.925820 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 17:30:54.976693 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 17:30:54.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.987215 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 17:30:55.034236 kernel: audit: type=1130 audit(1769189454.996:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.034272 kernel: audit: type=1131 audit(1769189454.996:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:54.989414 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 17:30:55.044893 initrd-setup-root-after-ignition[1397]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:30:55.044893 initrd-setup-root-after-ignition[1397]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:30:55.060797 initrd-setup-root-after-ignition[1401]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:30:55.070008 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 17:30:55.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.096583 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 17:30:55.106150 kernel: audit: type=1130 audit(1769189455.074:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.107052 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 17:30:55.163751 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 17:30:55.163894 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jan 23 17:30:55.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.173344 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 17:30:55.212361 kernel: audit: type=1130 audit(1769189455.172:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.212383 kernel: audit: type=1131 audit(1769189455.172:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.196152 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 17:30:55.217389 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 17:30:55.222040 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 17:30:55.265685 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:30:55.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.297050 kernel: audit: type=1130 audit(1769189455.276:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.291488 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 17:30:55.311712 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 23 17:30:55.312957 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:30:55.324124 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:30:55.335141 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 17:30:55.345399 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 17:30:55.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.345594 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:30:55.374454 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 17:30:55.384695 systemd[1]: Stopped target basic.target - Basic System. Jan 23 17:30:55.398437 kernel: audit: type=1131 audit(1769189455.354:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.393399 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 17:30:55.403585 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 17:30:55.414625 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Jan 23 17:30:55.425244 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 17:30:55.435203 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 17:30:55.445476 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 17:30:55.456122 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 17:30:55.467795 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 17:30:55.477190 systemd[1]: Stopped target swap.target - Swaps. Jan 23 17:30:55.485541 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 17:30:55.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.485698 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 17:30:55.515429 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:30:55.520025 kernel: audit: type=1131 audit(1769189455.493:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.525619 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:30:55.537700 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 17:30:55.542425 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:30:55.548482 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 17:30:55.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.548647 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 17:30:55.577782 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 17:30:55.610055 kernel: audit: type=1131 audit(1769189455.558:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.610079 kernel: audit: type=1131 audit(1769189455.588:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.577962 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 17:30:55.610366 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 17:30:55.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.610512 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 17:30:55.662096 kernel: audit: type=1131 audit(1769189455.621:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 23 17:30:55.662117 kernel: audit: type=1131 audit(1769189455.645:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.622601 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 17:30:55.622700 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 17:30:55.668085 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 17:30:55.695143 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 17:30:55.702466 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 17:30:55.714886 ignition[1421]: INFO : Ignition 2.24.0 Jan 23 17:30:55.714886 ignition[1421]: INFO : Stage: umount Jan 23 17:30:55.714886 ignition[1421]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:30:55.714886 ignition[1421]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 17:30:55.714886 ignition[1421]: INFO : umount: umount passed Jan 23 17:30:55.714886 ignition[1421]: INFO : Ignition finished successfully Jan 23 17:30:55.810166 kernel: audit: type=1131 audit(1769189455.722:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.810197 kernel: audit: type=1131 audit(1769189455.746:58): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.810213 kernel: audit: type=1131 audit(1769189455.774:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.703018 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:30:55.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.723424 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 23 17:30:55.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.723523 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:30:55.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.747428 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 17:30:55.747532 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 17:30:55.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.795191 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 17:30:55.795284 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 17:30:55.805892 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 17:30:55.806118 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 17:30:55.815544 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 17:30:55.815608 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 17:30:55.825474 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 17:30:55.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.825527 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 17:30:55.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.835097 systemd[1]: Stopped target network.target - Network. Jan 23 17:30:55.845162 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 17:30:55.845234 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 17:30:55.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.855893 systemd[1]: Stopped target paths.target - Path Units. Jan 23 17:30:55.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.865399 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 17:30:55.869372 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:30:55.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:30:55.875569 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 17:30:55.884693 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 17:30:56.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:56.008000 audit: BPF prog-id=6 op=UNLOAD Jan 23 17:30:56.008000 audit: BPF prog-id=9 op=UNLOAD Jan 23 17:30:55.894260 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 17:30:55.894317 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 17:30:55.899437 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 17:30:56.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.899487 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 17:30:55.908684 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 23 17:30:56.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.908703 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 23 17:30:56.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.918170 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 17:30:56.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.918227 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 17:30:55.926278 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 17:30:55.926311 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 17:30:55.935687 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 17:30:55.944501 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 17:30:55.954805 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 17:30:56.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.955462 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 17:30:55.955547 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 17:30:55.966858 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 17:30:56.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.966935 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 17:30:55.980791 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jan 23 17:30:56.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.980921 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 17:30:55.998324 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 17:30:56.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:55.998432 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 17:30:56.007765 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 17:30:56.015109 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 17:30:56.214975 kernel: hv_netvsc 0022487a-0cdf-0022-487a-0cdf0022487a eth0: Data path switched from VF: enP41664s1 Jan 23 17:30:56.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:56.015154 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:30:56.023645 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 17:30:56.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:56.023700 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 17:30:56.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:56.033451 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 17:30:56.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:56.043740 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 17:30:56.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:56.048010 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 17:30:56.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:56.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:56.056729 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 17:30:56.056784 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:30:56.073423 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 17:30:56.073487 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 23 17:30:56.083840 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:30:56.109918 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 17:30:56.110093 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:30:56.124377 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 17:30:56.124428 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 17:30:56.132708 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 17:30:56.132734 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:30:56.141194 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 17:30:56.141245 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 17:30:56.155404 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 17:30:56.155444 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 17:30:56.169977 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 17:30:56.170026 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 17:30:56.184341 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 17:30:56.201544 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 17:30:56.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:30:56.201623 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:30:56.211156 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 17:30:56.211216 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:30:56.228506 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 17:30:56.228569 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 17:30:56.238567 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 17:30:56.238622 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:30:56.247840 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:30:56.247893 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:30:56.258078 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 17:30:56.258170 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 17:30:56.346926 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 17:30:56.347079 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 17:30:56.362068 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 17:30:56.371485 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 17:30:56.406308 systemd[1]: Switching root. Jan 23 17:30:56.493552 systemd-journald[517]: Journal stopped Jan 23 17:31:01.963315 systemd-journald[517]: Received SIGTERM from PID 1 (systemd). 
Jan 23 17:31:01.963337 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 17:31:01.963345 kernel: SELinux: policy capability open_perms=1 Jan 23 17:31:01.963353 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 17:31:01.963358 kernel: SELinux: policy capability always_check_network=0 Jan 23 17:31:01.963364 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 17:31:01.963370 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 17:31:01.963376 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 17:31:01.963381 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 17:31:01.963388 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 17:31:01.963394 systemd[1]: Successfully loaded SELinux policy in 144.654ms. Jan 23 17:31:01.963401 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.625ms. Jan 23 17:31:01.963408 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 17:31:01.963416 systemd[1]: Detected virtualization microsoft. Jan 23 17:31:01.963424 systemd[1]: Detected architecture arm64. Jan 23 17:31:01.963430 systemd[1]: Detected first boot. Jan 23 17:31:01.963437 systemd[1]: Hostname set to <ci-4547.1.0-a-71c1b0067a>. Jan 23 17:31:01.963443 systemd[1]: Initializing machine ID from random generator. Jan 23 17:31:01.963449 zram_generator::config[1463]: No configuration found. Jan 23 17:31:01.963457 kernel: NET: Registered PF_VSOCK protocol family Jan 23 17:31:01.963463 systemd[1]: Populated /etc with preset unit settings. Jan 23 17:31:01.963469 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 23 17:31:01.963475 kernel: audit: type=1334 audit(1769189461.162:96): prog-id=12 op=LOAD Jan 23 17:31:01.963481 kernel: audit: type=1334 audit(1769189461.162:97): prog-id=3 op=UNLOAD Jan 23 17:31:01.963487 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 17:31:01.963494 kernel: audit: type=1334 audit(1769189461.162:98): prog-id=13 op=LOAD Jan 23 17:31:01.963500 kernel: audit: type=1334 audit(1769189461.162:99): prog-id=14 op=LOAD Jan 23 17:31:01.963506 kernel: audit: type=1334 audit(1769189461.162:100): prog-id=4 op=UNLOAD Jan 23 17:31:01.963512 kernel: audit: type=1334 audit(1769189461.162:101): prog-id=5 op=UNLOAD Jan 23 17:31:01.963518 kernel: audit: type=1131 audit(1769189461.165:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:01.963525 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 17:31:01.963531 kernel: audit: type=1334 audit(1769189461.181:103): prog-id=12 op=UNLOAD Jan 23 17:31:01.963538 kernel: audit: type=1130 audit(1769189461.216:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:01.963544 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Jan 23 17:31:01.963551 kernel: audit: type=1131 audit(1769189461.216:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:01.963558 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 17:31:01.963565 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 17:31:01.963572 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 17:31:01.963579 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 17:31:01.963586 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 17:31:01.963594 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 17:31:01.963601 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 17:31:01.963607 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 17:31:01.963615 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:31:01.963622 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:31:01.963628 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 17:31:01.963635 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 17:31:01.963642 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 17:31:01.963649 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 17:31:01.963655 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 23 17:31:01.963663 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:31:01.963669 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:31:01.963676 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 17:31:01.963683 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 17:31:01.963689 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 17:31:01.963698 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 17:31:01.963705 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:31:01.963711 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 17:31:01.963718 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 23 17:31:01.963724 systemd[1]: Reached target slices.target - Slice Units. Jan 23 17:31:01.963731 systemd[1]: Reached target swap.target - Swaps. Jan 23 17:31:01.963737 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 17:31:01.963745 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 17:31:01.963752 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 17:31:01.963759 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 23 17:31:01.963766 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. 
Jan 23 17:31:01.963773 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:31:01.963780 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 23 17:31:01.963787 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 23 17:31:01.963793 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 17:31:01.963800 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:31:01.963807 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 17:31:01.963814 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 17:31:01.963821 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 17:31:01.963828 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 17:31:01.963834 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 17:31:01.963859 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 17:31:01.963866 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 17:31:01.963874 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 17:31:01.963882 systemd[1]: Reached target machines.target - Containers. Jan 23 17:31:01.963889 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 17:31:01.963895 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:31:01.963902 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 17:31:01.963909 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 17:31:01.963916 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 17:31:01.963923 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 17:31:01.963930 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 17:31:01.963937 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 17:31:01.963944 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 17:31:01.963950 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 17:31:01.963957 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 17:31:01.963964 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 17:31:01.963971 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 17:31:01.963978 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 17:31:01.963985 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:31:01.963992 kernel: fuse: init (API version 7.41) Jan 23 17:31:01.963998 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 17:31:01.964005 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 23 17:31:01.964011 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 17:31:01.964032 systemd-journald[1545]: Collecting audit messages is enabled. Jan 23 17:31:01.964047 systemd-journald[1545]: Journal started Jan 23 17:31:01.964063 systemd-journald[1545]: Runtime Journal (/run/log/journal/fee59dd977584f86b44dd31b62fb43f4) is 8M, max 78.3M, 70.3M free. Jan 23 17:31:01.543000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 23 17:31:01.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:01.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:01.888000 audit: BPF prog-id=14 op=UNLOAD Jan 23 17:31:01.888000 audit: BPF prog-id=13 op=UNLOAD Jan 23 17:31:01.890000 audit: BPF prog-id=15 op=LOAD Jan 23 17:31:01.891000 audit: BPF prog-id=16 op=LOAD Jan 23 17:31:01.891000 audit: BPF prog-id=17 op=LOAD Jan 23 17:31:01.958000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 23 17:31:01.958000 audit[1545]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe4597fd0 a2=4000 a3=0 items=0 ppid=1 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:01.958000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 23 17:31:01.157770 systemd[1]: Queued start job for default target multi-user.target. Jan 23 17:31:01.163529 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 23 17:31:01.166907 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 17:31:01.167238 systemd[1]: systemd-journald.service: Consumed 2.639s CPU time. Jan 23 17:31:01.975621 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 17:31:01.975703 kernel: ACPI: bus type drm_connector registered Jan 23 17:31:02.000872 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 17:31:02.013952 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 17:31:02.027140 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 17:31:02.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.029360 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 17:31:02.033815 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 17:31:02.038748 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 17:31:02.042734 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 17:31:02.047094 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jan 23 17:31:02.051541 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 17:31:02.056933 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:31:02.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.063257 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 17:31:02.063406 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 17:31:02.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.069122 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 17:31:02.069262 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 17:31:02.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.074386 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 17:31:02.075889 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 17:31:02.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.080333 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 17:31:02.080463 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 17:31:02.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.086135 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 17:31:02.086269 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 17:31:02.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:31:02.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.091637 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 17:31:02.091767 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 17:31:02.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.096798 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:31:02.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.106226 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 17:31:02.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.119126 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 17:31:02.124583 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 23 17:31:02.130759 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 17:31:02.142963 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 17:31:02.153325 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 17:31:02.153440 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 17:31:02.160232 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 17:31:02.166157 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:31:02.166340 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 17:31:02.169011 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 17:31:02.180929 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 17:31:02.186636 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 17:31:02.187729 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 17:31:02.193113 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 17:31:02.196985 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jan 23 17:31:02.205683 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 17:31:02.211353 systemd-journald[1545]: Time spent on flushing to /var/log/journal/fee59dd977584f86b44dd31b62fb43f4 is 20.155ms for 1080 entries. Jan 23 17:31:02.211353 systemd-journald[1545]: System Journal (/var/log/journal/fee59dd977584f86b44dd31b62fb43f4) is 8M, max 2.2G, 2.2G free. Jan 23 17:31:02.260954 systemd-journald[1545]: Received client request to flush runtime journal. Jan 23 17:31:02.260993 kernel: loop1: detected capacity change from 0 to 200800 Jan 23 17:31:02.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.220060 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 17:31:02.225873 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 17:31:02.233712 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 17:31:02.240305 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:31:02.246430 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 17:31:02.255046 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 17:31:02.261592 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 17:31:02.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.267080 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 17:31:02.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.277044 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 17:31:02.285011 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 17:31:02.292135 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 17:31:02.321232 systemd-tmpfiles[1601]: ACLs are not supported, ignoring. Jan 23 17:31:02.321246 systemd-tmpfiles[1601]: ACLs are not supported, ignoring. Jan 23 17:31:02.329952 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 23 17:31:02.338183 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 17:31:02.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.366949 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 17:31:02.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.379882 kernel: loop2: detected capacity change from 0 to 45344 Jan 23 17:31:02.401434 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:31:02.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.477024 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 17:31:02.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.482000 audit: BPF prog-id=18 op=LOAD Jan 23 17:31:02.482000 audit: BPF prog-id=19 op=LOAD Jan 23 17:31:02.482000 audit: BPF prog-id=20 op=LOAD Jan 23 17:31:02.483734 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 23 17:31:02.488000 audit: BPF prog-id=21 op=LOAD Jan 23 17:31:02.490108 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 17:31:02.495366 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 17:31:02.503000 audit: BPF prog-id=22 op=LOAD Jan 23 17:31:02.503000 audit: BPF prog-id=23 op=LOAD Jan 23 17:31:02.503000 audit: BPF prog-id=24 op=LOAD Jan 23 17:31:02.505995 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 23 17:31:02.511000 audit: BPF prog-id=25 op=LOAD Jan 23 17:31:02.516085 systemd-tmpfiles[1628]: ACLs are not supported, ignoring. Jan 23 17:31:02.516102 systemd-tmpfiles[1628]: ACLs are not supported, ignoring. Jan 23 17:31:02.518000 audit: BPF prog-id=26 op=LOAD Jan 23 17:31:02.518000 audit: BPF prog-id=27 op=LOAD Jan 23 17:31:02.519697 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 17:31:02.529723 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:31:02.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.557326 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 17:31:02.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.570985 systemd-nsresourced[1630]: Not setting up BPF subsystem, as functionality has been disabled at compile time. 
Jan 23 17:31:02.574575 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 23 17:31:02.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.643823 systemd-oomd[1626]: No swap; memory pressure usage will be degraded Jan 23 17:31:02.644309 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 23 17:31:02.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.720641 systemd-resolved[1627]: Positive Trust Anchors: Jan 23 17:31:02.720663 systemd-resolved[1627]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 17:31:02.720666 systemd-resolved[1627]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 23 17:31:02.720685 systemd-resolved[1627]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 17:31:02.774877 kernel: loop3: detected capacity change from 0 to 27544 Jan 23 17:31:02.802030 systemd-resolved[1627]: Using system hostname 'ci-4547.1.0-a-71c1b0067a'. Jan 23 17:31:02.804040 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 17:31:02.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.809727 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:31:02.851057 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 17:31:02.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:02.856000 audit: BPF prog-id=8 op=UNLOAD Jan 23 17:31:02.856000 audit: BPF prog-id=7 op=UNLOAD Jan 23 17:31:02.856000 audit: BPF prog-id=28 op=LOAD Jan 23 17:31:02.856000 audit: BPF prog-id=29 op=LOAD Jan 23 17:31:02.858331 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:31:02.884445 systemd-udevd[1650]: Using default interface naming scheme 'v257'. Jan 23 17:31:03.079886 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:31:03.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:31:03.089000 audit: BPF prog-id=30 op=LOAD Jan 23 17:31:03.091746 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 17:31:03.145695 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 23 17:31:03.173876 kernel: loop4: detected capacity change from 0 to 100192 Jan 23 17:31:03.185870 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 17:31:03.202876 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#268 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 17:31:03.212196 systemd-networkd[1662]: lo: Link UP Jan 23 17:31:03.212208 systemd-networkd[1662]: lo: Gained carrier Jan 23 17:31:03.213718 systemd-networkd[1662]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 17:31:03.213726 systemd-networkd[1662]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 17:31:03.213992 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 17:31:03.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.227235 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 17:31:03.227639 systemd[1]: Reached target network.target - Network. Jan 23 17:31:03.233660 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 17:31:03.242389 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 17:31:03.266876 kernel: hv_vmbus: registering driver hyperv_fb Jan 23 17:31:03.275605 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 23 17:31:03.275673 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 23 17:31:03.281072 kernel: Console: switching to colour dummy device 80x25 Jan 23 17:31:03.291790 kernel: mlx5_core a2c0:00:02.0 enP41664s1: Link up Jan 23 17:31:03.292248 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 17:31:03.316819 systemd-networkd[1662]: enP41664s1: Link UP Jan 23 17:31:03.316955 kernel: hv_netvsc 0022487a-0cdf-0022-487a-0cdf0022487a eth0: Data path switched to VF: enP41664s1 Jan 23 17:31:03.317318 systemd-networkd[1662]: eth0: Link UP Jan 23 17:31:03.317394 systemd-networkd[1662]: eth0: Gained carrier Jan 23 17:31:03.317461 systemd-networkd[1662]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 17:31:03.322377 systemd-networkd[1662]: enP41664s1: Gained carrier Jan 23 17:31:03.339153 systemd-networkd[1662]: eth0: DHCPv4 address 10.200.20.22/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 17:31:03.346263 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:31:03.364601 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:31:03.367993 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:31:03.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:31:03.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.376460 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 17:31:03.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.388039 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:31:03.399383 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:31:03.399707 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:31:03.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.408940 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:31:03.454904 kernel: MACsec IEEE 802.1AE Jan 23 17:31:03.483394 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 23 17:31:03.491009 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 17:31:03.558877 kernel: loop5: detected capacity change from 0 to 200800 Jan 23 17:31:03.569497 kernel: hv_vmbus: registering driver hv_balloon Jan 23 17:31:03.569600 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 23 17:31:03.572395 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 23 17:31:03.584901 kernel: loop6: detected capacity change from 0 to 45344 Jan 23 17:31:03.597874 kernel: loop7: detected capacity change from 0 to 27544 Jan 23 17:31:03.610886 kernel: loop1: detected capacity change from 0 to 100192 Jan 23 17:31:03.621998 (sd-merge)[1781]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Jan 23 17:31:03.625683 (sd-merge)[1781]: Merged extensions into '/usr'. Jan 23 17:31:03.629931 systemd[1]: Reload requested from client PID 1600 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 17:31:03.629946 systemd[1]: Reloading... Jan 23 17:31:03.701977 zram_generator::config[1820]: No configuration found. Jan 23 17:31:03.882406 systemd[1]: Reloading finished in 252 ms. Jan 23 17:31:03.918787 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 17:31:03.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.925217 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 23 17:31:03.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.931644 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 17:31:03.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:03.945908 systemd[1]: Starting ensure-sysext.service... Jan 23 17:31:03.954742 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 17:31:03.960000 audit: BPF prog-id=31 op=LOAD Jan 23 17:31:03.960000 audit: BPF prog-id=21 op=UNLOAD Jan 23 17:31:03.960000 audit: BPF prog-id=32 op=LOAD Jan 23 17:31:03.960000 audit: BPF prog-id=30 op=UNLOAD Jan 23 17:31:03.960000 audit: BPF prog-id=33 op=LOAD Jan 23 17:31:03.960000 audit: BPF prog-id=22 op=UNLOAD Jan 23 17:31:03.961000 audit: BPF prog-id=34 op=LOAD Jan 23 17:31:03.961000 audit: BPF prog-id=35 op=LOAD Jan 23 17:31:03.961000 audit: BPF prog-id=23 op=UNLOAD Jan 23 17:31:03.961000 audit: BPF prog-id=24 op=UNLOAD Jan 23 17:31:03.961000 audit: BPF prog-id=36 op=LOAD Jan 23 17:31:03.961000 audit: BPF prog-id=37 op=LOAD Jan 23 17:31:03.961000 audit: BPF prog-id=28 op=UNLOAD Jan 23 17:31:03.961000 audit: BPF prog-id=29 op=UNLOAD Jan 23 17:31:03.961000 audit: BPF prog-id=38 op=LOAD Jan 23 17:31:03.961000 audit: BPF prog-id=15 op=UNLOAD Jan 23 17:31:03.961000 audit: BPF prog-id=39 op=LOAD Jan 23 17:31:03.961000 audit: BPF prog-id=40 op=LOAD Jan 23 17:31:03.961000 audit: BPF prog-id=16 op=UNLOAD Jan 23 17:31:03.961000 audit: BPF prog-id=17 op=UNLOAD Jan 23 17:31:03.962000 audit: BPF prog-id=41 op=LOAD Jan 23 17:31:03.962000 audit: BPF prog-id=18 op=UNLOAD Jan 23 17:31:03.962000 audit: BPF prog-id=42 op=LOAD Jan 23 17:31:03.962000 audit: BPF prog-id=43 op=LOAD Jan 23 17:31:03.962000 audit: BPF prog-id=19 op=UNLOAD Jan 23 17:31:03.962000 audit: BPF prog-id=20 op=UNLOAD Jan 23 17:31:03.962000 audit: BPF prog-id=44 op=LOAD Jan 23 17:31:03.962000 audit: BPF prog-id=25 op=UNLOAD Jan 23 17:31:03.962000 audit: BPF prog-id=45 op=LOAD Jan 23 17:31:03.962000 audit: BPF prog-id=46 op=LOAD Jan 23 17:31:03.962000 audit: BPF prog-id=26 op=UNLOAD Jan 23 17:31:03.962000 audit: BPF prog-id=27 op=UNLOAD Jan 23 17:31:03.967744 systemd-tmpfiles[1875]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 17:31:03.968113 systemd-tmpfiles[1875]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 17:31:03.968122 systemd[1]: Reload requested from client PID 1874 ('systemctl') (unit ensure-sysext.service)... Jan 23 17:31:03.968133 systemd[1]: Reloading... Jan 23 17:31:03.968457 systemd-tmpfiles[1875]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 17:31:03.969205 systemd-tmpfiles[1875]: ACLs are not supported, ignoring. Jan 23 17:31:03.969337 systemd-tmpfiles[1875]: ACLs are not supported, ignoring. Jan 23 17:31:03.990494 systemd-tmpfiles[1875]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 23 17:31:03.990650 systemd-tmpfiles[1875]: Skipping /boot Jan 23 17:31:03.998420 systemd-tmpfiles[1875]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 17:31:03.998557 systemd-tmpfiles[1875]: Skipping /boot Jan 23 17:31:04.037915 zram_generator::config[1912]: No configuration found. Jan 23 17:31:04.196769 systemd[1]: Reloading finished in 228 ms. Jan 23 17:31:04.208306 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:31:04.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.214000 audit: BPF prog-id=47 op=LOAD Jan 23 17:31:04.214000 audit: BPF prog-id=31 op=UNLOAD Jan 23 17:31:04.214000 audit: BPF prog-id=48 op=LOAD Jan 23 17:31:04.214000 audit: BPF prog-id=44 op=UNLOAD Jan 23 17:31:04.214000 audit: BPF prog-id=49 op=LOAD Jan 23 17:31:04.215000 audit: BPF prog-id=50 op=LOAD Jan 23 17:31:04.215000 audit: BPF prog-id=45 op=UNLOAD Jan 23 17:31:04.215000 audit: BPF prog-id=46 op=UNLOAD Jan 23 17:31:04.215000 audit: BPF prog-id=51 op=LOAD Jan 23 17:31:04.215000 audit: BPF prog-id=52 op=LOAD Jan 23 17:31:04.215000 audit: BPF prog-id=36 op=UNLOAD Jan 23 17:31:04.215000 audit: BPF prog-id=37 op=UNLOAD Jan 23 17:31:04.215000 audit: BPF prog-id=53 op=LOAD Jan 23 17:31:04.215000 audit: BPF prog-id=32 op=UNLOAD Jan 23 17:31:04.216000 audit: BPF prog-id=54 op=LOAD Jan 23 17:31:04.216000 audit: BPF prog-id=38 op=UNLOAD Jan 23 17:31:04.216000 audit: BPF prog-id=55 op=LOAD Jan 23 17:31:04.216000 audit: BPF prog-id=56 op=LOAD Jan 23 17:31:04.216000 audit: BPF prog-id=39 op=UNLOAD Jan 23 17:31:04.216000 audit: BPF prog-id=40 op=UNLOAD Jan 23 17:31:04.216000 audit: BPF prog-id=57 op=LOAD Jan 23 17:31:04.217000 audit: BPF prog-id=33 op=UNLOAD Jan 23 17:31:04.217000 audit: BPF prog-id=58 op=LOAD Jan 23 17:31:04.217000 audit: BPF prog-id=59 op=LOAD Jan 23 17:31:04.217000 audit: BPF prog-id=34 op=UNLOAD Jan 23 17:31:04.217000 audit: BPF prog-id=35 op=UNLOAD Jan 23 17:31:04.217000 audit: BPF prog-id=60 op=LOAD Jan 23 17:31:04.217000 audit: BPF prog-id=41 op=UNLOAD Jan 23 17:31:04.217000 audit: BPF prog-id=61 op=LOAD Jan 23 17:31:04.217000 audit: BPF prog-id=62 op=LOAD Jan 23 17:31:04.217000 audit: BPF prog-id=42 op=UNLOAD Jan 23 17:31:04.217000 audit: BPF prog-id=43 op=UNLOAD Jan 23 17:31:04.235085 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 17:31:04.248832 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 17:31:04.258138 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 17:31:04.265059 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 17:31:04.275131 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 17:31:04.283133 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:31:04.284569 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 17:31:04.293618 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 17:31:04.300053 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 23 17:31:04.305096 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:31:04.305284 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 17:31:04.305354 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:31:04.306255 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 17:31:04.307892 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 17:31:04.308000 audit[1971]: SYSTEM_BOOT pid=1971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.314703 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 17:31:04.315111 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 17:31:04.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.322341 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 17:31:04.322529 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 17:31:04.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.340921 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 17:31:04.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.349079 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:31:04.351286 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 17:31:04.364032 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 23 17:31:04.373139 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 17:31:04.377276 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:31:04.377449 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 23 17:31:04.377522 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:31:04.378345 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 17:31:04.387051 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 17:31:04.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.392559 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 17:31:04.392748 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 17:31:04.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.398994 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 17:31:04.399177 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 17:31:04.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.408603 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:31:04.409993 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 17:31:04.423115 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 17:31:04.435742 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 17:31:04.443089 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 17:31:04.447245 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:31:04.447408 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 23 17:31:04.447481 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:31:04.447606 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 17:31:04.453055 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 17:31:04.453276 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 17:31:04.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.458282 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 17:31:04.458476 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 17:31:04.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.463174 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 17:31:04.463349 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 17:31:04.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.469609 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 17:31:04.469779 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 17:31:04.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.478911 systemd[1]: Finished ensure-sysext.service. Jan 23 17:31:04.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.485596 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 23 17:31:04.485672 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 17:31:04.588504 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 17:31:04.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.601058 systemd-networkd[1662]: eth0: Gained IPv6LL Jan 23 17:31:04.607897 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 17:31:04.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:04.613409 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 17:31:04.843000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 23 17:31:04.843000 audit[2015]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc5d01620 a2=420 a3=0 items=0 ppid=1966 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:04.843000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 17:31:04.845301 augenrules[2015]: No rules Jan 23 17:31:04.846033 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 17:31:04.846313 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 17:31:04.901925 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 17:31:04.907332 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 17:31:10.116746 ldconfig[1968]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 17:31:10.126676 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 17:31:10.134804 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 17:31:10.148837 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 17:31:10.153576 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 17:31:10.157939 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 17:31:10.163286 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 17:31:10.168461 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 17:31:10.172643 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 17:31:10.177573 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 23 17:31:10.182336 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 23 17:31:10.186465 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 23 17:31:10.191406 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 17:31:10.191445 systemd[1]: Reached target paths.target - Path Units. Jan 23 17:31:10.195057 systemd[1]: Reached target timers.target - Timer Units. Jan 23 17:31:10.199725 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 17:31:10.205413 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 17:31:10.210989 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 17:31:10.216104 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 17:31:10.221077 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 17:31:10.227376 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 17:31:10.231825 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 17:31:10.237165 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 17:31:10.241556 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 17:31:10.245291 systemd[1]: Reached target basic.target - Basic System. Jan 23 17:31:10.248922 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 17:31:10.248948 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 17:31:10.255890 systemd[1]: Starting chronyd.service - NTP client/server... Jan 23 17:31:10.278998 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 17:31:10.284363 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 17:31:10.295022 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 17:31:10.302442 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 17:31:10.309694 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 17:31:10.318231 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 17:31:10.319995 chronyd[2027]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 23 17:31:10.323023 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 17:31:10.325095 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 23 17:31:10.331302 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 23 17:31:10.333801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:31:10.339585 KVP[2037]: KVP starting; pid is:2037 Jan 23 17:31:10.342064 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 17:31:10.349079 kernel: hv_utils: KVP IC version 4.0 Jan 23 17:31:10.348898 KVP[2037]: KVP LIC Version: 3.1 Jan 23 17:31:10.350517 jq[2035]: false Jan 23 17:31:10.352713 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 17:31:10.362614 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 17:31:10.369993 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 23 17:31:10.377880 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 17:31:10.378239 extend-filesystems[2036]: Found /dev/sda6 Jan 23 17:31:10.384490 chronyd[2027]: Timezone right/UTC failed leap second check, ignoring Jan 23 17:31:10.389234 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 17:31:10.384680 chronyd[2027]: Loaded seccomp filter (level 2) Jan 23 17:31:10.396914 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 17:31:10.397950 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 17:31:10.400128 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 17:31:10.404548 extend-filesystems[2036]: Found /dev/sda9 Jan 23 17:31:10.417093 extend-filesystems[2036]: Checking size of /dev/sda9 Jan 23 17:31:10.411165 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 17:31:10.419117 systemd[1]: Started chronyd.service - NTP client/server. Jan 23 17:31:10.435891 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 17:31:10.444440 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 17:31:10.444695 extend-filesystems[2036]: Resized partition /dev/sda9 Jan 23 17:31:10.456618 jq[2058]: true Jan 23 17:31:10.450529 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 17:31:10.454527 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 17:31:10.454918 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 17:31:10.463766 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 17:31:10.468996 extend-filesystems[2077]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 17:31:10.488416 update_engine[2054]: I20260123 17:31:10.481534 2054 main.cc:92] Flatcar Update Engine starting Jan 23 17:31:10.475540 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 17:31:10.475736 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 17:31:10.493994 kernel: EXT4-fs (sda9): resizing filesystem from 6359552 to 6376955 blocks Jan 23 17:31:10.494078 kernel: EXT4-fs (sda9): resized filesystem to 6376955 Jan 23 17:31:10.506414 jq[2083]: true Jan 23 17:31:10.529780 extend-filesystems[2077]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 23 17:31:10.529780 extend-filesystems[2077]: old_desc_blocks = 4, new_desc_blocks = 4 Jan 23 17:31:10.529780 extend-filesystems[2077]: The filesystem on /dev/sda9 is now 6376955 (4k) blocks long. Jan 23 17:31:10.585287 extend-filesystems[2036]: Resized filesystem in /dev/sda9 Jan 23 17:31:10.536314 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 17:31:10.538903 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 17:31:10.547346 systemd-logind[2051]: New seat seat0. Jan 23 17:31:10.549599 systemd-logind[2051]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 23 17:31:10.555061 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 17:31:10.618971 dbus-daemon[2030]: [system] SELinux support is enabled Jan 23 17:31:10.619258 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 23 17:31:10.626632 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 17:31:10.626659 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 17:31:10.633232 update_engine[2054]: I20260123 17:31:10.630795 2054 update_check_scheduler.cc:74] Next update check in 9m22s Jan 23 17:31:10.634874 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 17:31:10.634898 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 17:31:10.641345 bash[2114]: Updated "/home/core/.ssh/authorized_keys" Jan 23 17:31:10.643895 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 17:31:10.651068 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 17:31:10.653740 systemd[1]: Started update-engine.service - Update Engine. Jan 23 17:31:10.654101 dbus-daemon[2030]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 17:31:10.655002 tar[2080]: linux-arm64/LICENSE Jan 23 17:31:10.655002 tar[2080]: linux-arm64/helm Jan 23 17:31:10.686136 coreos-metadata[2029]: Jan 23 17:31:10.684 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 17:31:10.687607 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 17:31:10.691947 coreos-metadata[2029]: Jan 23 17:31:10.691 INFO Fetch successful Jan 23 17:31:10.692163 coreos-metadata[2029]: Jan 23 17:31:10.692 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 23 17:31:10.700874 coreos-metadata[2029]: Jan 23 17:31:10.699 INFO Fetch successful Jan 23 17:31:10.700874 coreos-metadata[2029]: Jan 23 17:31:10.699 INFO Fetching http://168.63.129.16/machine/5c0d1b5b-58f7-4a72-a6f2-ba3f3feee70b/0f4722c0%2D28d2%2D4a4b%2Db56a%2Dc1ca5b78a865.%5Fci%2D4547.1.0%2Da%2D71c1b0067a?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 23 17:31:10.701788 coreos-metadata[2029]: Jan 23 17:31:10.701 INFO Fetch successful Jan 23 17:31:10.702227 coreos-metadata[2029]: Jan 23 17:31:10.702 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 23 17:31:10.715991 coreos-metadata[2029]: Jan 23 17:31:10.715 INFO Fetch successful Jan 23 17:31:10.824290 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 17:31:10.835455 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 17:31:11.070547 locksmithd[2150]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 17:31:11.100724 tar[2080]: linux-arm64/README.md Jan 23 17:31:11.116772 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 17:31:11.153698 sshd_keygen[2073]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 17:31:11.171341 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 17:31:11.180277 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 17:31:11.191048 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 23 17:31:11.198581 systemd[1]: issuegen.service: Deactivated successfully. 
Jan 23 17:31:11.200905 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 17:31:11.210938 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 17:31:11.233468 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 17:31:11.240182 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 23 17:31:11.248144 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 17:31:11.255938 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 23 17:31:11.263375 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 17:31:11.356438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:31:11.361695 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:31:11.421417 containerd[2084]: time="2026-01-23T17:31:11Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 17:31:11.422436 containerd[2084]: time="2026-01-23T17:31:11.422222336Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 23 17:31:11.429512 containerd[2084]: time="2026-01-23T17:31:11.429474032Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.432µs" Jan 23 17:31:11.429614 containerd[2084]: time="2026-01-23T17:31:11.429597824Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 17:31:11.429714 containerd[2084]: time="2026-01-23T17:31:11.429702640Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 17:31:11.429756 containerd[2084]: time="2026-01-23T17:31:11.429745152Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 17:31:11.429962 containerd[2084]: time="2026-01-23T17:31:11.429942552Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 17:31:11.430024 containerd[2084]: time="2026-01-23T17:31:11.430011792Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 17:31:11.430137 containerd[2084]: time="2026-01-23T17:31:11.430120216Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 17:31:11.430196 containerd[2084]: time="2026-01-23T17:31:11.430186088Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 17:31:11.430441 containerd[2084]: time="2026-01-23T17:31:11.430418400Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 17:31:11.430507 containerd[2084]: time="2026-01-23T17:31:11.430493808Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 17:31:11.430555 containerd[2084]: time="2026-01-23T17:31:11.430541168Z" level=info msg="skip loading plugin" error="devmapper not configured: 
skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 17:31:11.430601 containerd[2084]: time="2026-01-23T17:31:11.430588920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 17:31:11.430809 containerd[2084]: time="2026-01-23T17:31:11.430786872Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 23 17:31:11.430890 containerd[2084]: time="2026-01-23T17:31:11.430875752Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 17:31:11.431016 containerd[2084]: time="2026-01-23T17:31:11.431001192Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 17:31:11.431258 containerd[2084]: time="2026-01-23T17:31:11.431234640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 17:31:11.431349 containerd[2084]: time="2026-01-23T17:31:11.431336312Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 17:31:11.431395 containerd[2084]: time="2026-01-23T17:31:11.431385560Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 17:31:11.431459 containerd[2084]: time="2026-01-23T17:31:11.431448720Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 17:31:11.431683 containerd[2084]: time="2026-01-23T17:31:11.431662056Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 17:31:11.431797 containerd[2084]: time="2026-01-23T17:31:11.431783368Z" level=info msg="metadata content store policy set" policy=shared Jan 23 17:31:11.451197 containerd[2084]: time="2026-01-23T17:31:11.451153000Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 17:31:11.451372 containerd[2084]: time="2026-01-23T17:31:11.451362288Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 17:31:11.621138 containerd[2084]: time="2026-01-23T17:31:11.620311392Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 23 17:31:11.621138 containerd[2084]: time="2026-01-23T17:31:11.620358680Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 17:31:11.621138 containerd[2084]: time="2026-01-23T17:31:11.620378024Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 17:31:11.621138 containerd[2084]: time="2026-01-23T17:31:11.620388176Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 17:31:11.621138 containerd[2084]: time="2026-01-23T17:31:11.620396288Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 17:31:11.621138 containerd[2084]: time="2026-01-23T17:31:11.620403016Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service 
type=io.containerd.service.v1 Jan 23 17:31:11.621138 containerd[2084]: time="2026-01-23T17:31:11.620411432Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 17:31:11.621138 containerd[2084]: time="2026-01-23T17:31:11.620419712Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 17:31:11.621138 containerd[2084]: time="2026-01-23T17:31:11.620427440Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 17:31:11.621138 containerd[2084]: time="2026-01-23T17:31:11.620435520Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 17:31:11.621138 containerd[2084]: time="2026-01-23T17:31:11.620441992Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 17:31:11.621138 containerd[2084]: time="2026-01-23T17:31:11.620450960Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 17:31:11.621138 containerd[2084]: time="2026-01-23T17:31:11.620620040Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 17:31:11.621389 containerd[2084]: time="2026-01-23T17:31:11.620635088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 17:31:11.621389 containerd[2084]: time="2026-01-23T17:31:11.620646040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 17:31:11.621389 containerd[2084]: time="2026-01-23T17:31:11.620652992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 17:31:11.621389 containerd[2084]: time="2026-01-23T17:31:11.620659616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 17:31:11.621389 containerd[2084]: time="2026-01-23T17:31:11.620666288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 17:31:11.621389 containerd[2084]: time="2026-01-23T17:31:11.620676080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 17:31:11.621389 containerd[2084]: time="2026-01-23T17:31:11.620684384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 17:31:11.621389 containerd[2084]: time="2026-01-23T17:31:11.620691144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 17:31:11.621389 containerd[2084]: time="2026-01-23T17:31:11.620698512Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 17:31:11.621389 containerd[2084]: time="2026-01-23T17:31:11.620704640Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 17:31:11.621389 containerd[2084]: time="2026-01-23T17:31:11.620736472Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 17:31:11.621389 containerd[2084]: time="2026-01-23T17:31:11.620800976Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 17:31:11.621389 containerd[2084]: time="2026-01-23T17:31:11.620810872Z" level=info msg="Start snapshots syncer" Jan 23 
17:31:11.621389 containerd[2084]: time="2026-01-23T17:31:11.620833592Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 17:31:11.622592 containerd[2084]: time="2026-01-23T17:31:11.622463952Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 17:31:11.623100 containerd[2084]: time="2026-01-23T17:31:11.622914352Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 17:31:11.623335 containerd[2084]: time="2026-01-23T17:31:11.623233056Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 17:31:11.624185 containerd[2084]: time="2026-01-23T17:31:11.623764176Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 17:31:11.624185 containerd[2084]: time="2026-01-23T17:31:11.623793296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 17:31:11.624185 containerd[2084]: time="2026-01-23T17:31:11.623802296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 17:31:11.624185 containerd[2084]: time="2026-01-23T17:31:11.623809280Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 17:31:11.624185 containerd[2084]: time="2026-01-23T17:31:11.623818384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 17:31:11.624185 containerd[2084]: time="2026-01-23T17:31:11.623825200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 17:31:11.624185 containerd[2084]: time="2026-01-23T17:31:11.623832432Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 17:31:11.624185 containerd[2084]: time="2026-01-23T17:31:11.623839504Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 17:31:11.624185 containerd[2084]: time="2026-01-23T17:31:11.623864192Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 17:31:11.624185 containerd[2084]: time="2026-01-23T17:31:11.623895968Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:31:11.624185 containerd[2084]: time="2026-01-23T17:31:11.623906728Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:31:11.624185 containerd[2084]: time="2026-01-23T17:31:11.623912088Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:31:11.624185 containerd[2084]: time="2026-01-23T17:31:11.623917704Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:31:11.624185 containerd[2084]: time="2026-01-23T17:31:11.623924160Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 17:31:11.624540 containerd[2084]: time="2026-01-23T17:31:11.623931384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 17:31:11.624540 containerd[2084]: time="2026-01-23T17:31:11.623939128Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 17:31:11.624540 containerd[2084]: time="2026-01-23T17:31:11.623952592Z" level=info msg="runtime interface created" Jan 23 17:31:11.624540 containerd[2084]: time="2026-01-23T17:31:11.623955848Z" level=info msg="created NRI interface" Jan 23 17:31:11.624540 containerd[2084]: time="2026-01-23T17:31:11.623960816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 17:31:11.624540 containerd[2084]: time="2026-01-23T17:31:11.623972128Z" level=info msg="Connect containerd service" Jan 23 17:31:11.624540 containerd[2084]: time="2026-01-23T17:31:11.623989720Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 17:31:11.624983 containerd[2084]: time="2026-01-23T17:31:11.624823824Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 17:31:11.666466 kubelet[2232]: E0123 17:31:11.666413 2232 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:31:11.668604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:31:11.668871 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:31:11.669463 systemd[1]: kubelet.service: Consumed 519ms CPU time, 248.5M memory peak. 
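The kubelet exit above happens because /var/lib/kubelet/config.yaml has not been written yet; on a kubeadm-managed node that file only appears after kubeadm init/join, so systemd keeps scheduling restarts (the counter reaches 2 further down in this log). A minimal Python sketch, not part of the log, for summarizing those crash-loop iterations from a saved journal/console dump; the regular expressions assume only the message formats visible above, and the input path is whatever file the dump was saved to.

#!/usr/bin/env python3
"""Summarize kubelet crash-loop activity in a saved journal/console dump (sketch)."""
import re
import sys

# "command failed" lines emitted by kubelet's run.go when config.yaml is missing.
FAILED = re.compile(r'run\.go:\d+\] "command failed" err="([^"]+)')
# systemd restart bookkeeping for the unit.
RESTART = re.compile(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)")

def summarize(path):
    errors, last_counter = [], 0
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            errors.extend(FAILED.findall(line))
            for counter in RESTART.findall(line):
                last_counter = max(last_counter, int(counter))
    print(f"kubelet failures: {len(errors)}, last restart counter: {last_counter}")
    for err in errors:
        # Keep just the leading summary of each error message.
        print(" -", err.split(", error")[0])

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "/dev/stdin")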
Jan 23 17:31:11.980281 containerd[2084]: time="2026-01-23T17:31:11.980159944Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 17:31:11.980281 containerd[2084]: time="2026-01-23T17:31:11.980222904Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 17:31:11.980281 containerd[2084]: time="2026-01-23T17:31:11.980248288Z" level=info msg="Start subscribing containerd event" Jan 23 17:31:11.980281 containerd[2084]: time="2026-01-23T17:31:11.980280456Z" level=info msg="Start recovering state" Jan 23 17:31:11.980425 containerd[2084]: time="2026-01-23T17:31:11.980352400Z" level=info msg="Start event monitor" Jan 23 17:31:11.980425 containerd[2084]: time="2026-01-23T17:31:11.980361416Z" level=info msg="Start cni network conf syncer for default" Jan 23 17:31:11.980425 containerd[2084]: time="2026-01-23T17:31:11.980366136Z" level=info msg="Start streaming server" Jan 23 17:31:11.980425 containerd[2084]: time="2026-01-23T17:31:11.980373136Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 17:31:11.980425 containerd[2084]: time="2026-01-23T17:31:11.980377776Z" level=info msg="runtime interface starting up..." Jan 23 17:31:11.980425 containerd[2084]: time="2026-01-23T17:31:11.980381584Z" level=info msg="starting plugins..." Jan 23 17:31:11.980425 containerd[2084]: time="2026-01-23T17:31:11.980391816Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 17:31:11.980518 containerd[2084]: time="2026-01-23T17:31:11.980496648Z" level=info msg="containerd successfully booted in 0.560614s" Jan 23 17:31:11.980723 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 17:31:11.985889 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 17:31:11.990987 systemd[1]: Startup finished in 3.462s (kernel) + 14.745s (initrd) + 14.685s (userspace) = 32.893s. Jan 23 17:31:12.677018 login[2225]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:12.678208 login[2226]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:12.688021 systemd-logind[2051]: New session 2 of user core. Jan 23 17:31:12.690294 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 17:31:12.691746 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 17:31:12.695104 systemd-logind[2051]: New session 1 of user core. Jan 23 17:31:12.716881 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 17:31:12.718969 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 17:31:12.728068 (systemd)[2262]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:12.730808 systemd-logind[2051]: New session 3 of user core. Jan 23 17:31:12.839992 systemd[2262]: Queued start job for default target default.target. Jan 23 17:31:12.845137 systemd[2262]: Created slice app.slice - User Application Slice. Jan 23 17:31:12.845302 systemd[2262]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 23 17:31:12.845369 systemd[2262]: Reached target paths.target - Paths. Jan 23 17:31:12.845587 systemd[2262]: Reached target timers.target - Timers. Jan 23 17:31:12.846818 systemd[2262]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 17:31:12.849008 systemd[2262]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... 
Jan 23 17:31:12.856060 systemd[2262]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 17:31:12.856194 systemd[2262]: Reached target sockets.target - Sockets. Jan 23 17:31:12.858781 systemd[2262]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 23 17:31:12.859129 systemd[2262]: Reached target basic.target - Basic System. Jan 23 17:31:12.859270 systemd[2262]: Reached target default.target - Main User Target. Jan 23 17:31:12.859474 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 17:31:12.859587 systemd[2262]: Startup finished in 123ms. Jan 23 17:31:12.866280 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 17:31:12.867019 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 17:31:12.943934 waagent[2224]: 2026-01-23T17:31:12.943785Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 23 17:31:12.949059 waagent[2224]: 2026-01-23T17:31:12.949004Z INFO Daemon Daemon OS: flatcar 4547.1.0 Jan 23 17:31:12.953630 waagent[2224]: 2026-01-23T17:31:12.952712Z INFO Daemon Daemon Python: 3.11.13 Jan 23 17:31:12.958851 waagent[2224]: 2026-01-23T17:31:12.957098Z INFO Daemon Daemon Run daemon Jan 23 17:31:12.960535 waagent[2224]: 2026-01-23T17:31:12.960490Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4547.1.0' Jan 23 17:31:12.967604 waagent[2224]: 2026-01-23T17:31:12.967553Z INFO Daemon Daemon Using waagent for provisioning Jan 23 17:31:12.971878 waagent[2224]: 2026-01-23T17:31:12.971810Z INFO Daemon Daemon Activate resource disk Jan 23 17:31:12.976071 waagent[2224]: 2026-01-23T17:31:12.976027Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 23 17:31:12.985254 waagent[2224]: 2026-01-23T17:31:12.985204Z INFO Daemon Daemon Found device: None Jan 23 17:31:12.988726 waagent[2224]: 2026-01-23T17:31:12.988687Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 23 17:31:12.994837 waagent[2224]: 2026-01-23T17:31:12.994805Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 23 17:31:13.003465 waagent[2224]: 2026-01-23T17:31:13.003425Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 17:31:13.007587 waagent[2224]: 2026-01-23T17:31:13.007556Z INFO Daemon Daemon Running default provisioning handler Jan 23 17:31:13.017703 waagent[2224]: 2026-01-23T17:31:13.017654Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 23 17:31:13.028154 waagent[2224]: 2026-01-23T17:31:13.028108Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 23 17:31:13.034876 waagent[2224]: 2026-01-23T17:31:13.034831Z INFO Daemon Daemon cloud-init is enabled: False Jan 23 17:31:13.038317 waagent[2224]: 2026-01-23T17:31:13.038288Z INFO Daemon Daemon Copying ovf-env.xml Jan 23 17:31:13.135878 waagent[2224]: 2026-01-23T17:31:13.134957Z INFO Daemon Daemon Successfully mounted dvd Jan 23 17:31:13.161550 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Jan 23 17:31:13.167453 waagent[2224]: 2026-01-23T17:31:13.163825Z INFO Daemon Daemon Detect protocol endpoint Jan 23 17:31:13.167769 waagent[2224]: 2026-01-23T17:31:13.167726Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 17:31:13.172198 waagent[2224]: 2026-01-23T17:31:13.172152Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 23 17:31:13.177299 waagent[2224]: 2026-01-23T17:31:13.177254Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 23 17:31:13.181330 waagent[2224]: 2026-01-23T17:31:13.181283Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 23 17:31:13.185318 waagent[2224]: 2026-01-23T17:31:13.185278Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 23 17:31:13.198141 waagent[2224]: 2026-01-23T17:31:13.198027Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 23 17:31:13.203754 waagent[2224]: 2026-01-23T17:31:13.203720Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 23 17:31:13.208087 waagent[2224]: 2026-01-23T17:31:13.208037Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 23 17:31:13.384091 waagent[2224]: 2026-01-23T17:31:13.384011Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 23 17:31:13.389299 waagent[2224]: 2026-01-23T17:31:13.389234Z INFO Daemon Daemon Forcing an update of the goal state. Jan 23 17:31:13.397101 waagent[2224]: 2026-01-23T17:31:13.397053Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 17:31:13.415508 waagent[2224]: 2026-01-23T17:31:13.415459Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 23 17:31:13.421132 waagent[2224]: 2026-01-23T17:31:13.421086Z INFO Daemon Jan 23 17:31:13.423458 waagent[2224]: 2026-01-23T17:31:13.423415Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 87c51516-3bba-4d4a-9cd8-9429d41504b1 eTag: 9986364454897202243 source: Fabric] Jan 23 17:31:13.433084 waagent[2224]: 2026-01-23T17:31:13.433038Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 23 17:31:13.438591 waagent[2224]: 2026-01-23T17:31:13.438550Z INFO Daemon Jan 23 17:31:13.440936 waagent[2224]: 2026-01-23T17:31:13.440896Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 23 17:31:13.450353 waagent[2224]: 2026-01-23T17:31:13.450270Z INFO Daemon Daemon Downloading artifacts profile blob Jan 23 17:31:13.584275 waagent[2224]: 2026-01-23T17:31:13.584202Z INFO Daemon Downloaded certificate {'thumbprint': 'C7F04735D9384968A49E39E4BBE5448748E32C69', 'hasPrivateKey': True} Jan 23 17:31:13.591950 waagent[2224]: 2026-01-23T17:31:13.591891Z INFO Daemon Fetch goal state completed Jan 23 17:31:13.629376 waagent[2224]: 2026-01-23T17:31:13.629330Z INFO Daemon Daemon Starting provisioning Jan 23 17:31:13.633606 waagent[2224]: 2026-01-23T17:31:13.633539Z INFO Daemon Daemon Handle ovf-env.xml. 
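The "Fetching full goal state from the WireServer" step above is an HTTP exchange with the fabric endpoint 168.63.129.16 using the negotiated wire protocol version (2012-11-30 in this log). A hedged Python sketch of that request follows; the /machine/?comp=goalstate path and the x-ms-version header come from public WALinuxAgent documentation, not from this log, so treat them as assumptions, and it only works from inside an Azure VM that can reach the WireServer.

#!/usr/bin/env python3
"""Fetch the WireServer goal state roughly the way waagent's protocol detection does (sketch)."""
import urllib.request

WIRESERVER = "168.63.129.16"          # endpoint logged by waagent above
WIRE_PROTOCOL_VERSION = "2012-11-30"  # "Wire protocol version" from the log

def fetch_goal_state() -> str:
    # Assumed endpoint and header; verify against the agent's own documentation.
    req = urllib.request.Request(
        f"http://{WIRESERVER}/machine/?comp=goalstate",
        headers={"x-ms-version": WIRE_PROTOCOL_VERSION},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    # Prints the goal-state XML, which carries the incarnation number seen in the log.
    print(fetch_goal_state())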
Jan 23 17:31:13.637409 waagent[2224]: 2026-01-23T17:31:13.637359Z INFO Daemon Daemon Set hostname [ci-4547.1.0-a-71c1b0067a] Jan 23 17:31:13.644622 waagent[2224]: 2026-01-23T17:31:13.644578Z INFO Daemon Daemon Publish hostname [ci-4547.1.0-a-71c1b0067a] Jan 23 17:31:13.649423 waagent[2224]: 2026-01-23T17:31:13.649372Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 23 17:31:13.654131 waagent[2224]: 2026-01-23T17:31:13.654087Z INFO Daemon Daemon Primary interface is [eth0] Jan 23 17:31:13.664530 systemd-networkd[1662]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 23 17:31:13.664762 systemd-networkd[1662]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Jan 23 17:31:13.664926 systemd-networkd[1662]: eth0: DHCP lease lost Jan 23 17:31:13.676381 waagent[2224]: 2026-01-23T17:31:13.676315Z INFO Daemon Daemon Create user account if not exists Jan 23 17:31:13.681230 waagent[2224]: 2026-01-23T17:31:13.681177Z INFO Daemon Daemon User core already exists, skip useradd Jan 23 17:31:13.685712 waagent[2224]: 2026-01-23T17:31:13.685662Z INFO Daemon Daemon Configure sudoer Jan 23 17:31:13.689917 systemd-networkd[1662]: eth0: DHCPv4 address 10.200.20.22/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 23 17:31:13.694908 waagent[2224]: 2026-01-23T17:31:13.694819Z INFO Daemon Daemon Configure sshd Jan 23 17:31:13.701221 waagent[2224]: 2026-01-23T17:31:13.701126Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 23 17:31:13.710960 waagent[2224]: 2026-01-23T17:31:13.710907Z INFO Daemon Daemon Deploy ssh public key. Jan 23 17:31:15.105055 waagent[2224]: 2026-01-23T17:31:15.104984Z INFO Daemon Daemon Provisioning complete Jan 23 17:31:15.118730 waagent[2224]: 2026-01-23T17:31:15.118693Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 23 17:31:15.123606 waagent[2224]: 2026-01-23T17:31:15.123573Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 23 17:31:15.131257 waagent[2224]: 2026-01-23T17:31:15.131226Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 23 17:31:15.232865 waagent[2315]: 2026-01-23T17:31:15.232781Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 23 17:31:15.233210 waagent[2315]: 2026-01-23T17:31:15.232943Z INFO ExtHandler ExtHandler OS: flatcar 4547.1.0 Jan 23 17:31:15.233210 waagent[2315]: 2026-01-23T17:31:15.232982Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 23 17:31:15.233210 waagent[2315]: 2026-01-23T17:31:15.233019Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jan 23 17:31:15.267805 waagent[2315]: 2026-01-23T17:31:15.267726Z INFO ExtHandler ExtHandler Distro: flatcar-4547.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 23 17:31:15.267974 waagent[2315]: 2026-01-23T17:31:15.267946Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 17:31:15.268019 waagent[2315]: 2026-01-23T17:31:15.268000Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 17:31:15.274261 waagent[2315]: 2026-01-23T17:31:15.274213Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 17:31:15.279238 waagent[2315]: 2026-01-23T17:31:15.279204Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 23 17:31:15.279637 waagent[2315]: 2026-01-23T17:31:15.279605Z INFO ExtHandler Jan 23 17:31:15.279689 waagent[2315]: 2026-01-23T17:31:15.279671Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: fae70a98-2786-4eb2-9df8-6c322b203a63 eTag: 9986364454897202243 source: Fabric] Jan 23 17:31:15.279954 waagent[2315]: 2026-01-23T17:31:15.279926Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 23 17:31:15.280367 waagent[2315]: 2026-01-23T17:31:15.280338Z INFO ExtHandler Jan 23 17:31:15.280407 waagent[2315]: 2026-01-23T17:31:15.280391Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 23 17:31:15.285451 waagent[2315]: 2026-01-23T17:31:15.285422Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 17:31:15.340485 waagent[2315]: 2026-01-23T17:31:15.340411Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C7F04735D9384968A49E39E4BBE5448748E32C69', 'hasPrivateKey': True} Jan 23 17:31:15.340899 waagent[2315]: 2026-01-23T17:31:15.340840Z INFO ExtHandler Fetch goal state completed Jan 23 17:31:15.353859 waagent[2315]: 2026-01-23T17:31:15.353791Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.5-dev (Library: OpenSSL 3.5.5-dev ) Jan 23 17:31:15.357464 waagent[2315]: 2026-01-23T17:31:15.357374Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2315 Jan 23 17:31:15.357525 waagent[2315]: 2026-01-23T17:31:15.357503Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 23 17:31:15.357767 waagent[2315]: 2026-01-23T17:31:15.357735Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 23 17:31:15.358851 waagent[2315]: 2026-01-23T17:31:15.358813Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4547.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 23 17:31:15.359206 waagent[2315]: 2026-01-23T17:31:15.359175Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4547.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 23 17:31:15.359329 waagent[2315]: 2026-01-23T17:31:15.359304Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 23 17:31:15.359742 waagent[2315]: 2026-01-23T17:31:15.359712Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 23 17:31:15.421251 waagent[2315]: 2026-01-23T17:31:15.420631Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 23 17:31:15.421251 waagent[2315]: 2026-01-23T17:31:15.420815Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 23 17:31:15.426079 waagent[2315]: 2026-01-23T17:31:15.426046Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 23 17:31:15.431193 systemd[1]: Reload requested from client PID 2330 ('systemctl') (unit waagent.service)... Jan 23 17:31:15.431211 systemd[1]: Reloading... Jan 23 17:31:15.514905 zram_generator::config[2375]: No configuration found. Jan 23 17:31:15.662137 systemd[1]: Reloading finished in 230 ms. Jan 23 17:31:15.685191 waagent[2315]: 2026-01-23T17:31:15.683032Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 23 17:31:15.685191 waagent[2315]: 2026-01-23T17:31:15.683176Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 23 17:31:16.832861 waagent[2315]: 2026-01-23T17:31:16.832252Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 23 17:31:16.832861 waagent[2315]: 2026-01-23T17:31:16.832567Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. 
configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 23 17:31:16.833234 waagent[2315]: 2026-01-23T17:31:16.833193Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 23 17:31:16.833626 waagent[2315]: 2026-01-23T17:31:16.833570Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 23 17:31:16.833672 waagent[2315]: 2026-01-23T17:31:16.833633Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 17:31:16.833708 waagent[2315]: 2026-01-23T17:31:16.833679Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 17:31:16.833981 waagent[2315]: 2026-01-23T17:31:16.833947Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 23 17:31:16.834263 waagent[2315]: 2026-01-23T17:31:16.834238Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 17:31:16.834342 waagent[2315]: 2026-01-23T17:31:16.834186Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 23 17:31:16.834434 waagent[2315]: 2026-01-23T17:31:16.834374Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 23 17:31:16.834612 waagent[2315]: 2026-01-23T17:31:16.834569Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 23 17:31:16.834612 waagent[2315]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 23 17:31:16.834612 waagent[2315]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 23 17:31:16.834612 waagent[2315]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 23 17:31:16.834612 waagent[2315]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 23 17:31:16.834612 waagent[2315]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 17:31:16.834612 waagent[2315]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 17:31:16.834916 waagent[2315]: 2026-01-23T17:31:16.834758Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 23 17:31:16.834996 waagent[2315]: 2026-01-23T17:31:16.834971Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 23 17:31:16.835114 waagent[2315]: 2026-01-23T17:31:16.835072Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 17:31:16.835328 waagent[2315]: 2026-01-23T17:31:16.835262Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 23 17:31:16.835601 waagent[2315]: 2026-01-23T17:31:16.835560Z INFO EnvHandler ExtHandler Configure routes Jan 23 17:31:16.836014 waagent[2315]: 2026-01-23T17:31:16.835979Z INFO EnvHandler ExtHandler Gateway:None Jan 23 17:31:16.836469 waagent[2315]: 2026-01-23T17:31:16.836352Z INFO EnvHandler ExtHandler Routes:None Jan 23 17:31:16.845753 waagent[2315]: 2026-01-23T17:31:16.845702Z INFO ExtHandler ExtHandler Jan 23 17:31:16.846010 waagent[2315]: 2026-01-23T17:31:16.845971Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: c437ad2a-e8d6-4c44-9736-1eb59e6dec0c correlation f48e16ea-b654-413b-bec9-89289907a1d6 created: 2026-01-23T17:30:16.604073Z] Jan 23 17:31:16.846422 waagent[2315]: 2026-01-23T17:31:16.846379Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
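The routing table that MonitorHandler dumps above comes straight from /proc/net/route, where destination, gateway, and mask are 32-bit little-endian hexadecimal IPv4 values (0114C80A is 10.200.20.1, the DHCP gateway acquired earlier in this log). A short Python sketch for decoding such a dump; it assumes only the column layout shown above.

#!/usr/bin/env python3
"""Decode the little-endian hex addresses in a /proc/net/route style dump."""
import socket
import struct

def hex_to_ip(value: str) -> str:
    # /proc/net/route stores IPv4 addresses as 32-bit little-endian hex.
    return socket.inet_ntoa(struct.pack("<L", int(value, 16)))

if __name__ == "__main__":
    # Default-route row from the MonitorHandler dump above:
    # eth0  00000000  0114C80A  0003  ...  00000000
    dest, gateway, mask = "00000000", "0114C80A", "00000000"
    print(hex_to_ip(dest), "via", hex_to_ip(gateway), "mask", hex_to_ip(mask))
    # -> 0.0.0.0 via 10.200.20.1 mask 0.0.0.0  (the default route)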
Jan 23 17:31:16.846981 waagent[2315]: 2026-01-23T17:31:16.846943Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 23 17:31:16.870010 waagent[2315]: 2026-01-23T17:31:16.869964Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 23 17:31:16.870010 waagent[2315]: Try `iptables -h' or 'iptables --help' for more information.) Jan 23 17:31:16.870508 waagent[2315]: 2026-01-23T17:31:16.870479Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 7F36195D-C9F4-4AEB-B381-7DC45C678AF2;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 23 17:31:16.886088 waagent[2315]: 2026-01-23T17:31:16.886018Z INFO MonitorHandler ExtHandler Network interfaces: Jan 23 17:31:16.886088 waagent[2315]: Executing ['ip', '-a', '-o', 'link']: Jan 23 17:31:16.886088 waagent[2315]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 23 17:31:16.886088 waagent[2315]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:0c:df brd ff:ff:ff:ff:ff:ff\ altname enx0022487a0cdf Jan 23 17:31:16.886088 waagent[2315]: 3: enP41664s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7a:0c:df brd ff:ff:ff:ff:ff:ff\ altname enP41664p0s2 Jan 23 17:31:16.886088 waagent[2315]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 23 17:31:16.886088 waagent[2315]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 23 17:31:16.886088 waagent[2315]: 2: eth0 inet 10.200.20.22/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 23 17:31:16.886088 waagent[2315]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 23 17:31:16.886088 waagent[2315]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 23 17:31:16.886088 waagent[2315]: 2: eth0 inet6 fe80::222:48ff:fe7a:cdf/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 23 17:31:16.941627 waagent[2315]: 2026-01-23T17:31:16.941552Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 23 17:31:16.941627 waagent[2315]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 17:31:16.941627 waagent[2315]: pkts bytes target prot opt in out source destination Jan 23 17:31:16.941627 waagent[2315]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 17:31:16.941627 waagent[2315]: pkts bytes target prot opt in out source destination Jan 23 17:31:16.941627 waagent[2315]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 17:31:16.941627 waagent[2315]: pkts bytes target prot opt in out source destination Jan 23 17:31:16.941627 waagent[2315]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 17:31:16.941627 waagent[2315]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 17:31:16.941627 waagent[2315]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 17:31:16.944025 waagent[2315]: 2026-01-23T17:31:16.943975Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 23 17:31:16.944025 waagent[2315]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 17:31:16.944025 waagent[2315]: pkts bytes target prot opt in 
out source destination Jan 23 17:31:16.944025 waagent[2315]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 17:31:16.944025 waagent[2315]: pkts bytes target prot opt in out source destination Jan 23 17:31:16.944025 waagent[2315]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 17:31:16.944025 waagent[2315]: pkts bytes target prot opt in out source destination Jan 23 17:31:16.944025 waagent[2315]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 17:31:16.944025 waagent[2315]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 17:31:16.944025 waagent[2315]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 17:31:16.944231 waagent[2315]: 2026-01-23T17:31:16.944205Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 23 17:31:21.919674 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 17:31:21.921134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:31:22.037539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:31:22.048108 (kubelet)[2467]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:31:22.137832 kubelet[2467]: E0123 17:31:22.137769 2467 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:31:22.140669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:31:22.140790 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:31:22.141358 systemd[1]: kubelet.service: Consumed 120ms CPU time, 106.5M memory peak. Jan 23 17:31:32.391481 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 17:31:32.393375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:31:32.492306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:31:32.500295 (kubelet)[2481]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:31:32.588813 kubelet[2481]: E0123 17:31:32.588756 2481 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:31:32.591141 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:31:32.591257 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:31:32.592927 systemd[1]: kubelet.service: Consumed 113ms CPU time, 106.7M memory peak. Jan 23 17:31:32.983887 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 17:31:32.985098 systemd[1]: Started sshd@0-10.200.20.22:22-10.200.16.10:47482.service - OpenSSH per-connection server daemon (10.200.16.10:47482). 
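EnvHandler's "Created firewall rules for the Azure Fabric" listing above boils down to three OUTPUT rules for 168.63.129.16: accept TCP to port 53, accept traffic owned by UID 0 (the agent itself), and drop other new or invalid connections. A rough Python reconstruction follows as a sketch only; the security table is inferred from the failing "iptables -w -t security -L OUTPUT" command the agent logs further up, so verify the table and chain before relying on this.

#!/usr/bin/env python3
"""Re-create (approximately) the waagent firewall rules shown above (sketch; needs root)."""
import subprocess

WIRESERVER = "168.63.129.16"  # Azure WireServer / fabric endpoint from the log

RULES = [
    # "tcp dpt:53": allow DNS queries to the wireserver.
    ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
    # "owner UID match 0": allow traffic originated by root.
    ["-p", "tcp", "-d", WIRESERVER, "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # "ctstate INVALID,NEW": drop new/invalid connections from everyone else.
    ["-p", "tcp", "-d", WIRESERVER, "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

def apply_rules() -> None:
    for rule in RULES:
        # -w waits for the xtables lock, matching the invocation in the log.
        subprocess.run(["iptables", "-w", "-t", "security", "-A", "OUTPUT", *rule], check=True)

if __name__ == "__main__":
    apply_rules()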
Jan 23 17:31:33.584894 sshd[2489]: Accepted publickey for core from 10.200.16.10 port 47482 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:31:33.586103 sshd-session[2489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:33.591160 systemd-logind[2051]: New session 4 of user core. Jan 23 17:31:33.598094 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 17:31:33.907203 systemd[1]: Started sshd@1-10.200.20.22:22-10.200.16.10:47488.service - OpenSSH per-connection server daemon (10.200.16.10:47488). Jan 23 17:31:34.178191 chronyd[2027]: Selected source PHC0 Jan 23 17:31:34.326157 sshd[2496]: Accepted publickey for core from 10.200.16.10 port 47488 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:31:34.327372 sshd-session[2496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:34.331439 systemd-logind[2051]: New session 5 of user core. Jan 23 17:31:34.346330 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 17:31:34.562513 sshd[2500]: Connection closed by 10.200.16.10 port 47488 Jan 23 17:31:34.563265 sshd-session[2496]: pam_unix(sshd:session): session closed for user core Jan 23 17:31:34.568176 systemd[1]: sshd@1-10.200.20.22:22-10.200.16.10:47488.service: Deactivated successfully. Jan 23 17:31:34.569874 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 17:31:34.570557 systemd-logind[2051]: Session 5 logged out. Waiting for processes to exit. Jan 23 17:31:34.571764 systemd-logind[2051]: Removed session 5. Jan 23 17:31:34.649178 systemd[1]: Started sshd@2-10.200.20.22:22-10.200.16.10:47492.service - OpenSSH per-connection server daemon (10.200.16.10:47492). Jan 23 17:31:35.043324 sshd[2506]: Accepted publickey for core from 10.200.16.10 port 47492 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:31:35.044411 sshd-session[2506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:35.048895 systemd-logind[2051]: New session 6 of user core. Jan 23 17:31:35.059292 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 17:31:35.258028 sshd[2510]: Connection closed by 10.200.16.10 port 47492 Jan 23 17:31:35.258575 sshd-session[2506]: pam_unix(sshd:session): session closed for user core Jan 23 17:31:35.262890 systemd[1]: sshd@2-10.200.20.22:22-10.200.16.10:47492.service: Deactivated successfully. Jan 23 17:31:35.264398 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 17:31:35.265117 systemd-logind[2051]: Session 6 logged out. Waiting for processes to exit. Jan 23 17:31:35.266298 systemd-logind[2051]: Removed session 6. Jan 23 17:31:35.356902 systemd[1]: Started sshd@3-10.200.20.22:22-10.200.16.10:47508.service - OpenSSH per-connection server daemon (10.200.16.10:47508). Jan 23 17:31:35.780067 sshd[2516]: Accepted publickey for core from 10.200.16.10 port 47508 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:31:35.781279 sshd-session[2516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:35.785681 systemd-logind[2051]: New session 7 of user core. Jan 23 17:31:35.796057 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 23 17:31:36.016060 sshd[2520]: Connection closed by 10.200.16.10 port 47508 Jan 23 17:31:36.016704 sshd-session[2516]: pam_unix(sshd:session): session closed for user core Jan 23 17:31:36.020246 systemd[1]: sshd@3-10.200.20.22:22-10.200.16.10:47508.service: Deactivated successfully. Jan 23 17:31:36.021724 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 17:31:36.022403 systemd-logind[2051]: Session 7 logged out. Waiting for processes to exit. Jan 23 17:31:36.023556 systemd-logind[2051]: Removed session 7. Jan 23 17:31:36.103878 systemd[1]: Started sshd@4-10.200.20.22:22-10.200.16.10:47522.service - OpenSSH per-connection server daemon (10.200.16.10:47522). Jan 23 17:31:36.525467 sshd[2526]: Accepted publickey for core from 10.200.16.10 port 47522 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:31:36.526643 sshd-session[2526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:36.530805 systemd-logind[2051]: New session 8 of user core. Jan 23 17:31:36.537030 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 17:31:36.786870 sudo[2531]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 17:31:36.787103 sudo[2531]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:31:36.833717 sudo[2531]: pam_unix(sudo:session): session closed for user root Jan 23 17:31:36.912462 sshd[2530]: Connection closed by 10.200.16.10 port 47522 Jan 23 17:31:36.911632 sshd-session[2526]: pam_unix(sshd:session): session closed for user core Jan 23 17:31:36.915439 systemd[1]: sshd@4-10.200.20.22:22-10.200.16.10:47522.service: Deactivated successfully. Jan 23 17:31:36.917618 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 17:31:36.919158 systemd-logind[2051]: Session 8 logged out. Waiting for processes to exit. Jan 23 17:31:36.920840 systemd-logind[2051]: Removed session 8. Jan 23 17:31:37.001254 systemd[1]: Started sshd@5-10.200.20.22:22-10.200.16.10:47536.service - OpenSSH per-connection server daemon (10.200.16.10:47536). Jan 23 17:31:37.421276 sshd[2538]: Accepted publickey for core from 10.200.16.10 port 47536 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:31:37.422477 sshd-session[2538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:37.426643 systemd-logind[2051]: New session 9 of user core. Jan 23 17:31:37.438116 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 17:31:37.580879 sudo[2544]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 17:31:37.581114 sudo[2544]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:31:37.585597 sudo[2544]: pam_unix(sudo:session): session closed for user root Jan 23 17:31:37.591354 sudo[2543]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 17:31:37.591565 sudo[2543]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:31:37.598181 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 23 17:31:37.638000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 23 17:31:37.639533 augenrules[2568]: No rules Jan 23 17:31:37.642209 kernel: kauditd_printk_skb: 161 callbacks suppressed Jan 23 17:31:37.642271 kernel: audit: type=1305 audit(1769189497.638:263): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 23 17:31:37.645502 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 17:31:37.645981 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 17:31:37.650716 sudo[2543]: pam_unix(sudo:session): session closed for user root Jan 23 17:31:37.638000 audit[2568]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe00cc070 a2=420 a3=0 items=0 ppid=2549 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:37.671216 kernel: audit: type=1300 audit(1769189497.638:263): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe00cc070 a2=420 a3=0 items=0 ppid=2549 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:37.671348 kernel: audit: type=1327 audit(1769189497.638:263): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 17:31:37.638000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 23 17:31:37.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:37.691138 kernel: audit: type=1130 audit(1769189497.646:264): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:37.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:37.703337 kernel: audit: type=1131 audit(1769189497.646:265): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:37.703386 kernel: audit: type=1106 audit(1769189497.649:266): pid=2543 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:31:37.649000 audit[2543]: USER_END pid=2543 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:31:37.650000 audit[2543]: CRED_DISP pid=2543 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 23 17:31:37.728277 kernel: audit: type=1104 audit(1769189497.650:267): pid=2543 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:31:37.729263 sshd[2542]: Connection closed by 10.200.16.10 port 47536 Jan 23 17:31:37.728777 sshd-session[2538]: pam_unix(sshd:session): session closed for user core Jan 23 17:31:37.729000 audit[2538]: USER_END pid=2538 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:37.729000 audit[2538]: CRED_DISP pid=2538 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:37.750632 systemd[1]: sshd@5-10.200.20.22:22-10.200.16.10:47536.service: Deactivated successfully. Jan 23 17:31:37.758690 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 17:31:37.759804 systemd-logind[2051]: Session 9 logged out. Waiting for processes to exit. Jan 23 17:31:37.762731 systemd-logind[2051]: Removed session 9. Jan 23 17:31:37.766726 kernel: audit: type=1106 audit(1769189497.729:268): pid=2538 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:37.766795 kernel: audit: type=1104 audit(1769189497.729:269): pid=2538 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:37.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.22:22-10.200.16.10:47536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:37.781011 kernel: audit: type=1131 audit(1769189497.749:270): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.22:22-10.200.16.10:47536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:37.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.22:22-10.200.16.10:47550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:37.820548 systemd[1]: Started sshd@6-10.200.20.22:22-10.200.16.10:47550.service - OpenSSH per-connection server daemon (10.200.16.10:47550). 
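The audit records above and below carry the executed command line as a hex-encoded PROCTITLE field in which argv elements are separated by NUL bytes. Decoding it turns, for example, the audit-rules entry into a readable auditctl invocation, and does the same for the iptables records that follow during Docker startup. A small Python helper, not part of the log:

#!/usr/bin/env python3
"""Decode the hex-encoded PROCTITLE field of Linux audit records."""

def decode_proctitle(hex_value: str) -> str:
    # The kernel stores the process title as raw argv memory, so arguments
    # are separated by NUL bytes; replace them with spaces for readability.
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode("utf-8", "replace")

if __name__ == "__main__":
    # PROCTITLE value from the audit-rules record earlier in this log.
    sample = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    print(decode_proctitle(sample))  # -> /sbin/auditctl -R /etc/audit/audit.rules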
Jan 23 17:31:38.240000 audit[2577]: USER_ACCT pid=2577 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:38.241312 sshd[2577]: Accepted publickey for core from 10.200.16.10 port 47550 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:31:38.241000 audit[2577]: CRED_ACQ pid=2577 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:38.241000 audit[2577]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe0104ef0 a2=3 a3=0 items=0 ppid=1 pid=2577 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:38.241000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:31:38.242877 sshd-session[2577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:31:38.248394 systemd-logind[2051]: New session 10 of user core. Jan 23 17:31:38.255065 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 17:31:38.257000 audit[2577]: USER_START pid=2577 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:38.258000 audit[2581]: CRED_ACQ pid=2581 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:31:38.400000 audit[2582]: USER_ACCT pid=2582 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:31:38.401219 sudo[2582]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 17:31:38.400000 audit[2582]: CRED_REFR pid=2582 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:31:38.400000 audit[2582]: USER_START pid=2582 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:31:38.401439 sudo[2582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:31:39.448718 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 23 17:31:39.462359 (dockerd)[2601]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 17:31:40.457580 dockerd[2601]: time="2026-01-23T17:31:40.457513418Z" level=info msg="Starting up" Jan 23 17:31:40.458374 dockerd[2601]: time="2026-01-23T17:31:40.458347490Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 17:31:40.467656 dockerd[2601]: time="2026-01-23T17:31:40.467570866Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 17:31:40.493940 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport252650087-merged.mount: Deactivated successfully. Jan 23 17:31:40.545201 dockerd[2601]: time="2026-01-23T17:31:40.544996810Z" level=info msg="Loading containers: start." Jan 23 17:31:40.572917 kernel: Initializing XFRM netlink socket Jan 23 17:31:40.625000 audit[2648]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2648 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.625000 audit[2648]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffcf0abfc0 a2=0 a3=0 items=0 ppid=2601 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.625000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 17:31:40.627000 audit[2650]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2650 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.627000 audit[2650]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffeaef6380 a2=0 a3=0 items=0 ppid=2601 pid=2650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.627000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 17:31:40.629000 audit[2652]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2652 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.629000 audit[2652]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffda99b710 a2=0 a3=0 items=0 ppid=2601 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.629000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 17:31:40.631000 audit[2654]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2654 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.631000 audit[2654]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffb72f150 a2=0 a3=0 items=0 ppid=2601 pid=2654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.631000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 23 17:31:40.632000 audit[2656]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=2656 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.632000 audit[2656]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcdd5bc30 a2=0 a3=0 items=0 ppid=2601 pid=2656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.632000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 23 17:31:40.634000 audit[2658]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=2658 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.634000 audit[2658]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffcebd9490 a2=0 a3=0 items=0 ppid=2601 pid=2658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.634000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 17:31:40.636000 audit[2660]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2660 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.636000 audit[2660]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff05679e0 a2=0 a3=0 items=0 ppid=2601 pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.636000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 17:31:40.638000 audit[2662]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=2662 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.638000 audit[2662]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffdac196c0 a2=0 a3=0 items=0 ppid=2601 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.638000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 23 17:31:40.677000 audit[2665]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2665 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.677000 audit[2665]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffc955a520 a2=0 a3=0 items=0 ppid=2601 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.677000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 23 17:31:40.679000 audit[2667]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=2667 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.679000 audit[2667]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd3e25bc0 a2=0 a3=0 items=0 ppid=2601 pid=2667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.679000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 23 17:31:40.681000 audit[2669]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2669 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.681000 audit[2669]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffd7d043e0 a2=0 a3=0 items=0 ppid=2601 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.681000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 23 17:31:40.682000 audit[2671]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2671 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.682000 audit[2671]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=fffffada3eb0 a2=0 a3=0 items=0 ppid=2601 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.682000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 17:31:40.684000 audit[2673]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=2673 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.684000 audit[2673]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffd99ca350 a2=0 a3=0 items=0 ppid=2601 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.684000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 23 17:31:40.792000 audit[2703]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=2703 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:31:40.792000 audit[2703]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc1a981f0 a2=0 a3=0 items=0 ppid=2601 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.792000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 23 17:31:40.794000 audit[2705]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=2705 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:31:40.794000 audit[2705]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd6452a00 a2=0 a3=0 items=0 ppid=2601 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.794000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 23 17:31:40.796000 audit[2707]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2707 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:31:40.796000 audit[2707]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff01b54f0 a2=0 a3=0 items=0 ppid=2601 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.796000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 23 17:31:40.797000 audit[2709]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2709 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:31:40.797000 audit[2709]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd8688e20 a2=0 a3=0 items=0 ppid=2601 pid=2709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.797000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 23 17:31:40.799000 audit[2711]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=2711 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:31:40.799000 audit[2711]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffce1bfd00 a2=0 a3=0 items=0 ppid=2601 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.799000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 23 17:31:40.801000 audit[2713]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=2713 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:31:40.801000 audit[2713]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff2ccc400 a2=0 a3=0 items=0 ppid=2601 pid=2713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.801000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 17:31:40.802000 audit[2715]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=2715 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:31:40.802000 audit[2715]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff31834c0 a2=0 a3=0 items=0 ppid=2601 pid=2715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.802000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 17:31:40.804000 audit[2717]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=2717 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:31:40.804000 audit[2717]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffd262f710 a2=0 a3=0 items=0 ppid=2601 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.804000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 23 17:31:40.806000 audit[2719]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=2719 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:31:40.806000 audit[2719]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffedcdf8b0 a2=0 a3=0 items=0 ppid=2601 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.806000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 23 17:31:40.808000 audit[2721]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=2721 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:31:40.808000 audit[2721]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc19344d0 a2=0 a3=0 items=0 ppid=2601 pid=2721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.808000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 23 17:31:40.809000 audit[2723]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=2723 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:31:40.809000 audit[2723]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffe5681190 a2=0 a3=0 items=0 ppid=2601 pid=2723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.809000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 23 17:31:40.811000 audit[2725]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=2725 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:31:40.811000 audit[2725]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffe10d4500 a2=0 a3=0 items=0 ppid=2601 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.811000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 23 17:31:40.813000 audit[2727]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=2727 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:31:40.813000 audit[2727]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffc306a690 a2=0 a3=0 items=0 ppid=2601 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.813000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 23 17:31:40.817000 audit[2732]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=2732 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.817000 audit[2732]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe10dd510 a2=0 a3=0 items=0 ppid=2601 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.817000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 23 17:31:40.819000 audit[2734]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2734 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.819000 audit[2734]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff0f32980 a2=0 a3=0 items=0 ppid=2601 pid=2734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.819000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 23 17:31:40.821000 audit[2736]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2736 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.821000 audit[2736]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffce7b4630 a2=0 a3=0 items=0 ppid=2601 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.821000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 23 17:31:40.822000 audit[2738]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2738 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:31:40.822000 audit[2738]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffb145520 a2=0 a3=0 items=0 ppid=2601 pid=2738 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.822000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 23 17:31:40.824000 audit[2740]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=2740 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:31:40.824000 audit[2740]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffe82d4050 a2=0 a3=0 items=0 ppid=2601 pid=2740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.824000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 23 17:31:40.826000 audit[2742]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=2742 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:31:40.826000 audit[2742]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe44b4210 a2=0 a3=0 items=0 ppid=2601 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.826000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 23 17:31:40.890000 audit[2747]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=2747 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.890000 audit[2747]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=fffffe712f10 a2=0 a3=0 items=0 ppid=2601 pid=2747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.890000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 23 17:31:40.892000 audit[2749]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=2749 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.892000 audit[2749]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd42226e0 a2=0 a3=0 items=0 ppid=2601 pid=2749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.892000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 23 17:31:40.900000 audit[2757]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2757 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.900000 audit[2757]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=fffff1138520 a2=0 a3=0 items=0 ppid=2601 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.900000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 23 17:31:40.905000 audit[2762]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2762 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.905000 audit[2762]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffded8f140 a2=0 a3=0 items=0 ppid=2601 pid=2762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.905000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 23 17:31:40.908000 audit[2764]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2764 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.908000 audit[2764]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffc153fcb0 a2=0 a3=0 items=0 ppid=2601 pid=2764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.908000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 23 17:31:40.910000 audit[2766]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2766 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.910000 audit[2766]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffcc82c00 a2=0 a3=0 items=0 ppid=2601 pid=2766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.910000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 23 17:31:40.912000 audit[2768]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=2768 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.912000 audit[2768]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffca7e4330 a2=0 a3=0 items=0 ppid=2601 pid=2768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.912000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 23 17:31:40.914000 audit[2770]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=2770 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:40.914000 audit[2770]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdc4d6780 a2=0 a3=0 items=0 ppid=2601 pid=2770 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:40.914000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 23 17:31:40.915763 systemd-networkd[1662]: docker0: Link UP Jan 23 17:31:40.930582 dockerd[2601]: time="2026-01-23T17:31:40.930528458Z" level=info msg="Loading containers: done." Jan 23 17:31:40.972753 dockerd[2601]: time="2026-01-23T17:31:40.972321250Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 17:31:40.972753 dockerd[2601]: time="2026-01-23T17:31:40.972430626Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 17:31:40.972753 dockerd[2601]: time="2026-01-23T17:31:40.972553202Z" level=info msg="Initializing buildkit" Jan 23 17:31:41.026767 dockerd[2601]: time="2026-01-23T17:31:41.026718210Z" level=info msg="Completed buildkit initialization" Jan 23 17:31:41.029837 dockerd[2601]: time="2026-01-23T17:31:41.029790746Z" level=info msg="Daemon has completed initialization" Jan 23 17:31:41.030547 dockerd[2601]: time="2026-01-23T17:31:41.029990354Z" level=info msg="API listen on /run/docker.sock" Jan 23 17:31:41.030123 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 17:31:41.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:41.867377 containerd[2084]: time="2026-01-23T17:31:41.867215898Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 23 17:31:42.619606 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 17:31:42.621952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:31:43.158801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:31:43.163869 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 23 17:31:43.164016 kernel: audit: type=1130 audit(1769189503.158:321): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:43.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:43.184504 (kubelet)[2825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:31:43.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 23 17:31:43.213925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:31:43.226650 kubelet[2825]: E0123 17:31:43.211935 2825 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:31:43.214038 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:31:43.214680 systemd[1]: kubelet.service: Consumed 118ms CPU time, 105.3M memory peak. Jan 23 17:31:43.228885 kernel: audit: type=1131 audit(1769189503.213:322): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 17:31:43.334678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3408686160.mount: Deactivated successfully. Jan 23 17:31:44.065891 containerd[2084]: time="2026-01-23T17:31:44.065786269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:44.068444 containerd[2084]: time="2026-01-23T17:31:44.068233531Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24396970" Jan 23 17:31:44.071661 containerd[2084]: time="2026-01-23T17:31:44.071629930Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:44.075855 containerd[2084]: time="2026-01-23T17:31:44.075785952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:44.076874 containerd[2084]: time="2026-01-23T17:31:44.076518230Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 2.209180092s" Jan 23 17:31:44.076874 containerd[2084]: time="2026-01-23T17:31:44.076557600Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Jan 23 17:31:44.077236 containerd[2084]: time="2026-01-23T17:31:44.077205866Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 23 17:31:45.050395 containerd[2084]: time="2026-01-23T17:31:45.049719968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:45.052291 containerd[2084]: time="2026-01-23T17:31:45.052246626Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19127323" Jan 23 17:31:45.054944 containerd[2084]: time="2026-01-23T17:31:45.054911204Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 23 17:31:45.060604 containerd[2084]: time="2026-01-23T17:31:45.060561623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:45.061340 containerd[2084]: time="2026-01-23T17:31:45.061160374Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 983.922123ms" Jan 23 17:31:45.061451 containerd[2084]: time="2026-01-23T17:31:45.061437525Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Jan 23 17:31:45.062571 containerd[2084]: time="2026-01-23T17:31:45.062547318Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 23 17:31:45.892856 containerd[2084]: time="2026-01-23T17:31:45.892781951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:45.896308 containerd[2084]: time="2026-01-23T17:31:45.896249170Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=0" Jan 23 17:31:45.899133 containerd[2084]: time="2026-01-23T17:31:45.899103166Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:45.903965 containerd[2084]: time="2026-01-23T17:31:45.903905181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:45.904468 containerd[2084]: time="2026-01-23T17:31:45.904332515Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 841.665695ms" Jan 23 17:31:45.904468 containerd[2084]: time="2026-01-23T17:31:45.904365861Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Jan 23 17:31:45.904943 containerd[2084]: time="2026-01-23T17:31:45.904863743Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 23 17:31:46.830797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3027590206.mount: Deactivated successfully. 
Jan 23 17:31:47.023302 containerd[2084]: time="2026-01-23T17:31:47.023235842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:47.025743 containerd[2084]: time="2026-01-23T17:31:47.025591644Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=9841285" Jan 23 17:31:47.028409 containerd[2084]: time="2026-01-23T17:31:47.028375787Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:47.031928 containerd[2084]: time="2026-01-23T17:31:47.031894641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:47.032455 containerd[2084]: time="2026-01-23T17:31:47.032224066Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.127334514s" Jan 23 17:31:47.032455 containerd[2084]: time="2026-01-23T17:31:47.032255780Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Jan 23 17:31:47.032961 containerd[2084]: time="2026-01-23T17:31:47.032937095Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 23 17:31:48.495170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2957252649.mount: Deactivated successfully. 
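The PullImage/Pulled pairs above are containerd acting as the CRI image service for the control-plane images. The same images can be pulled or listed by hand with crictl; the socket path below is containerd's default and is an assumption for this host:

    # point crictl at containerd's CRI endpoint (default path, assumed here)
    export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
    crictl pull registry.k8s.io/kube-proxy:v1.34.3
    crictl images --digests | grep registry.k8s.io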
Jan 23 17:31:49.177562 containerd[2084]: time="2026-01-23T17:31:49.176871473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:49.179434 containerd[2084]: time="2026-01-23T17:31:49.179390099Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=2106" Jan 23 17:31:49.182131 containerd[2084]: time="2026-01-23T17:31:49.182104976Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:49.186687 containerd[2084]: time="2026-01-23T17:31:49.186653218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:49.188222 containerd[2084]: time="2026-01-23T17:31:49.188093428Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 2.155123716s" Jan 23 17:31:49.188222 containerd[2084]: time="2026-01-23T17:31:49.188130878Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Jan 23 17:31:49.188615 containerd[2084]: time="2026-01-23T17:31:49.188589918Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 23 17:31:49.719692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1618039592.mount: Deactivated successfully. 
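Each "Pulled image" message ends with the time containerd spent on that pull (2.155123716s for coredns above). A quick way to collect those lines from the journal, assuming containerd logs under a containerd.service unit as the containerd[2084] prefix suggests:

    # list every completed pull with its reported duration
    journalctl -u containerd --no-pager | grep 'msg="Pulled image'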
Jan 23 17:31:49.736841 containerd[2084]: time="2026-01-23T17:31:49.736294671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:49.738752 containerd[2084]: time="2026-01-23T17:31:49.738709916Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 23 17:31:49.741501 containerd[2084]: time="2026-01-23T17:31:49.741478171Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:49.745858 containerd[2084]: time="2026-01-23T17:31:49.745821203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:49.746250 containerd[2084]: time="2026-01-23T17:31:49.746220951Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 557.556813ms" Jan 23 17:31:49.746300 containerd[2084]: time="2026-01-23T17:31:49.746252113Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Jan 23 17:31:49.747032 containerd[2084]: time="2026-01-23T17:31:49.747007720Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 23 17:31:50.361401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount377977077.mount: Deactivated successfully. Jan 23 17:31:51.691873 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Jan 23 17:31:52.643668 containerd[2084]: time="2026-01-23T17:31:52.643610500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:52.646294 containerd[2084]: time="2026-01-23T17:31:52.646209532Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=96314798" Jan 23 17:31:52.649877 containerd[2084]: time="2026-01-23T17:31:52.649340611Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:52.653301 containerd[2084]: time="2026-01-23T17:31:52.653255284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:31:52.654078 containerd[2084]: time="2026-01-23T17:31:52.654046118Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 2.907008101s" Jan 23 17:31:52.654197 containerd[2084]: time="2026-01-23T17:31:52.654181972Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Jan 23 17:31:53.242024 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 17:31:53.243975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:31:53.352705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:31:53.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:53.366867 kernel: audit: type=1130 audit(1769189513.352:323): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:53.370111 (kubelet)[3042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:31:53.399098 kubelet[3042]: E0123 17:31:53.399031 3042 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:31:53.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 17:31:53.403356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:31:53.403466 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:31:53.403790 systemd[1]: kubelet.service: Consumed 111ms CPU time, 106.8M memory peak. 
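The kubelet keeps exiting with status=1 because /var/lib/kubelet/config.yaml does not exist yet; that file is written by kubeadm during init/join, so the scheduled-restart loop above is the normal pre-bootstrap state rather than a broken unit. A quick check on the node (illustrative):

    # the kubelet config file only appears after kubeadm init/join has run
    ls -l /var/lib/kubelet/config.yaml 2>/dev/null || echo "node not bootstrapped yet"
    systemctl status kubelet -n 5 --no-pager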
Jan 23 17:31:53.419863 kernel: audit: type=1131 audit(1769189513.402:324): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 17:31:55.859926 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:31:55.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:55.860428 systemd[1]: kubelet.service: Consumed 111ms CPU time, 106.8M memory peak. Jan 23 17:31:55.864060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:31:55.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:55.886113 kernel: audit: type=1130 audit(1769189515.859:325): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:55.886260 kernel: audit: type=1131 audit(1769189515.859:326): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:55.903729 systemd[1]: Reload requested from client PID 3057 ('systemctl') (unit session-10.scope)... Jan 23 17:31:55.903743 systemd[1]: Reloading... Jan 23 17:31:56.006876 zram_generator::config[3110]: No configuration found. Jan 23 17:31:56.115980 update_engine[2054]: I20260123 17:31:56.114897 2054 update_attempter.cc:509] Updating boot flags... Jan 23 17:31:56.172981 systemd[1]: Reloading finished in 268 ms. Jan 23 17:31:56.217906 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 17:31:56.217989 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 17:31:56.218683 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:31:56.218837 systemd[1]: kubelet.service: Consumed 63ms CPU time, 75.1M memory peak. Jan 23 17:31:56.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 23 17:31:56.220777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:31:56.233000 audit: BPF prog-id=87 op=LOAD Jan 23 17:31:56.245496 kernel: audit: type=1130 audit(1769189516.217:327): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 23 17:31:56.245541 kernel: audit: type=1334 audit(1769189516.233:328): prog-id=87 op=LOAD Jan 23 17:31:56.238000 audit: BPF prog-id=67 op=UNLOAD Jan 23 17:31:56.253816 kernel: audit: type=1334 audit(1769189516.238:329): prog-id=67 op=UNLOAD Jan 23 17:31:56.238000 audit: BPF prog-id=88 op=LOAD Jan 23 17:31:56.259690 kernel: audit: type=1334 audit(1769189516.238:330): prog-id=88 op=LOAD Jan 23 17:31:56.266228 kernel: audit: type=1334 audit(1769189516.238:331): prog-id=89 op=LOAD Jan 23 17:31:56.238000 audit: BPF prog-id=89 op=LOAD Jan 23 17:31:56.238000 audit: BPF prog-id=68 op=UNLOAD Jan 23 17:31:56.270635 kernel: audit: type=1334 audit(1769189516.238:332): prog-id=68 op=UNLOAD Jan 23 17:31:56.238000 audit: BPF prog-id=69 op=UNLOAD Jan 23 17:31:56.239000 audit: BPF prog-id=90 op=LOAD Jan 23 17:31:56.239000 audit: BPF prog-id=82 op=UNLOAD Jan 23 17:31:56.239000 audit: BPF prog-id=91 op=LOAD Jan 23 17:31:56.240000 audit: BPF prog-id=92 op=LOAD Jan 23 17:31:56.240000 audit: BPF prog-id=83 op=UNLOAD Jan 23 17:31:56.240000 audit: BPF prog-id=84 op=UNLOAD Jan 23 17:31:56.240000 audit: BPF prog-id=93 op=LOAD Jan 23 17:31:56.240000 audit: BPF prog-id=78 op=UNLOAD Jan 23 17:31:56.240000 audit: BPF prog-id=94 op=LOAD Jan 23 17:31:56.240000 audit: BPF prog-id=95 op=LOAD Jan 23 17:31:56.240000 audit: BPF prog-id=79 op=UNLOAD Jan 23 17:31:56.240000 audit: BPF prog-id=80 op=UNLOAD Jan 23 17:31:56.240000 audit: BPF prog-id=96 op=LOAD Jan 23 17:31:56.240000 audit: BPF prog-id=73 op=UNLOAD Jan 23 17:31:56.240000 audit: BPF prog-id=97 op=LOAD Jan 23 17:31:56.240000 audit: BPF prog-id=98 op=LOAD Jan 23 17:31:56.240000 audit: BPF prog-id=74 op=UNLOAD Jan 23 17:31:56.240000 audit: BPF prog-id=75 op=UNLOAD Jan 23 17:31:56.246000 audit: BPF prog-id=99 op=LOAD Jan 23 17:31:56.246000 audit: BPF prog-id=70 op=UNLOAD Jan 23 17:31:56.246000 audit: BPF prog-id=100 op=LOAD Jan 23 17:31:56.246000 audit: BPF prog-id=101 op=LOAD Jan 23 17:31:56.246000 audit: BPF prog-id=71 op=UNLOAD Jan 23 17:31:56.246000 audit: BPF prog-id=72 op=UNLOAD Jan 23 17:31:56.246000 audit: BPF prog-id=102 op=LOAD Jan 23 17:31:56.246000 audit: BPF prog-id=86 op=UNLOAD Jan 23 17:31:56.247000 audit: BPF prog-id=103 op=LOAD Jan 23 17:31:56.247000 audit: BPF prog-id=85 op=UNLOAD Jan 23 17:31:56.247000 audit: BPF prog-id=104 op=LOAD Jan 23 17:31:56.247000 audit: BPF prog-id=105 op=LOAD Jan 23 17:31:56.247000 audit: BPF prog-id=76 op=UNLOAD Jan 23 17:31:56.247000 audit: BPF prog-id=77 op=UNLOAD Jan 23 17:31:56.248000 audit: BPF prog-id=106 op=LOAD Jan 23 17:31:56.248000 audit: BPF prog-id=81 op=UNLOAD Jan 23 17:31:56.858453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:31:56.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:31:56.873131 (kubelet)[3265]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:31:56.950564 kubelet[3265]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 17:31:56.950564 kubelet[3265]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
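The "Referenced but unset environment variable" notice and the flag-deprecation warnings above are informational: the unit references $KUBELET_EXTRA_ARGS and $KUBELET_KUBEADM_ARGS, which a drop-in or EnvironmentFile normally populates, and the deprecated flags (--pod-infra-container-image, --volume-plugin-dir) are meant to migrate into the kubelet config file. A sketch for inspecting how this kubelet is launched; the drop-in path follows the common kubeadm layout and is an assumption for this image:

    # effective unit file plus any drop-ins that define Environment/EnvironmentFile
    systemctl cat kubelet --no-pager
    # the exact command line the service manager uses
    systemctl show kubelet -p ExecStart --no-pager
    # common kubeadm drop-in location (may differ on this image)
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf 2>/dev/null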
Jan 23 17:31:56.951222 kubelet[3265]: I0123 17:31:56.951172 3265 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:31:57.569558 kubelet[3265]: I0123 17:31:57.569507 3265 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 17:31:57.569558 kubelet[3265]: I0123 17:31:57.569548 3265 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:31:57.570577 kubelet[3265]: I0123 17:31:57.570554 3265 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 17:31:57.570577 kubelet[3265]: I0123 17:31:57.570575 3265 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 17:31:57.570917 kubelet[3265]: I0123 17:31:57.570900 3265 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 17:31:57.582870 kubelet[3265]: E0123 17:31:57.582567 3265 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 17:31:57.583295 kubelet[3265]: I0123 17:31:57.583265 3265 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:31:57.588478 kubelet[3265]: I0123 17:31:57.587354 3265 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:31:57.590031 kubelet[3265]: I0123 17:31:57.590012 3265 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 17:31:57.590383 kubelet[3265]: I0123 17:31:57.590354 3265 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 17:31:57.590606 kubelet[3265]: I0123 17:31:57.590458 3265 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.1.0-a-71c1b0067a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 17:31:57.590742 kubelet[3265]: I0123 17:31:57.590730 3265 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 17:31:57.590806 kubelet[3265]: I0123 17:31:57.590799 3265 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 17:31:57.590996 kubelet[3265]: I0123 17:31:57.590984 3265 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 17:31:57.619629 kubelet[3265]: I0123 17:31:57.619588 3265 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:31:57.634479 kubelet[3265]: I0123 17:31:57.634445 3265 kubelet.go:475] "Attempting to sync node with API server" Jan 23 17:31:57.635232 kubelet[3265]: I0123 17:31:57.635213 3265 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 17:31:57.635357 kubelet[3265]: E0123 17:31:57.635322 3265 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.1.0-a-71c1b0067a&limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 17:31:57.663417 kubelet[3265]: I0123 17:31:57.663385 3265 kubelet.go:387] "Adding apiserver pod source" Jan 23 17:31:57.663561 kubelet[3265]: I0123 17:31:57.663550 3265 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 17:31:57.664553 kubelet[3265]: E0123 17:31:57.664526 3265 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.200.20.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 17:31:57.666040 kubelet[3265]: I0123 17:31:57.665238 3265 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 17:31:57.666040 kubelet[3265]: I0123 17:31:57.665638 3265 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 17:31:57.666040 kubelet[3265]: I0123 17:31:57.665656 3265 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 17:31:57.666040 kubelet[3265]: W0123 17:31:57.665692 3265 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 17:31:57.668230 kubelet[3265]: I0123 17:31:57.668216 3265 server.go:1262] "Started kubelet" Jan 23 17:31:57.670167 kubelet[3265]: I0123 17:31:57.670148 3265 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 17:31:57.672350 kubelet[3265]: E0123 17:31:57.671225 3265 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.22:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.22:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547.1.0-a-71c1b0067a.188d6c7e24f486b8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.1.0-a-71c1b0067a,UID:ci-4547.1.0-a-71c1b0067a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.1.0-a-71c1b0067a,},FirstTimestamp:2026-01-23 17:31:57.668157112 +0000 UTC m=+0.782051162,LastTimestamp:2026-01-23 17:31:57.668157112 +0000 UTC m=+0.782051162,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.1.0-a-71c1b0067a,}" Jan 23 17:31:57.674327 kubelet[3265]: E0123 17:31:57.674308 3265 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:31:57.674545 kubelet[3265]: I0123 17:31:57.674525 3265 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 17:31:57.675271 kubelet[3265]: I0123 17:31:57.675251 3265 server.go:310] "Adding debug handlers to kubelet server" Jan 23 17:31:57.678000 kubelet[3265]: I0123 17:31:57.677951 3265 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:31:57.678075 kubelet[3265]: I0123 17:31:57.678012 3265 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 17:31:57.678208 kubelet[3265]: I0123 17:31:57.678190 3265 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:31:57.678435 kubelet[3265]: I0123 17:31:57.678409 3265 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:31:57.677000 audit[3303]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3303 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:57.677000 audit[3303]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffbe953e0 a2=0 a3=0 items=0 ppid=3265 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:57.677000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 17:31:57.679869 kubelet[3265]: I0123 17:31:57.679690 3265 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 17:31:57.679869 kubelet[3265]: I0123 17:31:57.679786 3265 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 17:31:57.679869 kubelet[3265]: I0123 17:31:57.679870 3265 reconciler.go:29] "Reconciler: start to sync state" Jan 23 17:31:57.679000 audit[3304]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3304 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:57.679000 audit[3304]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffea085650 a2=0 a3=0 items=0 ppid=3265 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:57.679000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 23 17:31:57.681038 kubelet[3265]: E0123 17:31:57.680035 3265 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547.1.0-a-71c1b0067a\" not found" Jan 23 17:31:57.681038 kubelet[3265]: E0123 17:31:57.680595 3265 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 17:31:57.681038 kubelet[3265]: E0123 17:31:57.680644 3265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.1.0-a-71c1b0067a?timeout=10s\": dial tcp 10.200.20.22:6443: connect: connection refused" interval="200ms" Jan 23 17:31:57.684476 kubelet[3265]: I0123 17:31:57.684451 3265 factory.go:223] Registration of the containerd container factory successfully Jan 23 17:31:57.684476 kubelet[3265]: I0123 17:31:57.684469 3265 factory.go:223] Registration of the systemd container factory successfully Jan 23 17:31:57.684568 kubelet[3265]: I0123 17:31:57.684550 3265 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 17:31:57.684000 audit[3306]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3306 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:57.684000 audit[3306]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffffb643fb0 a2=0 a3=0 items=0 ppid=3265 pid=3306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:57.684000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 17:31:57.691000 audit[3310]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3310 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:57.691000 audit[3310]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff47fc480 a2=0 a3=0 items=0 ppid=3265 pid=3310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:57.691000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 17:31:57.698044 kubelet[3265]: I0123 17:31:57.697572 3265 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 17:31:57.698044 kubelet[3265]: I0123 17:31:57.697589 3265 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:31:57.698044 kubelet[3265]: I0123 17:31:57.697608 3265 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:31:57.780772 kubelet[3265]: E0123 17:31:57.780730 3265 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547.1.0-a-71c1b0067a\" not found" Jan 23 17:31:57.881341 kubelet[3265]: E0123 17:31:57.881291 3265 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547.1.0-a-71c1b0067a\" not found" Jan 23 17:31:57.881687 kubelet[3265]: E0123 17:31:57.881654 3265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.1.0-a-71c1b0067a?timeout=10s\": dial tcp 10.200.20.22:6443: connect: connection refused" interval="400ms" Jan 23 17:31:57.982095 kubelet[3265]: E0123 17:31:57.982044 3265 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547.1.0-a-71c1b0067a\" not found" Jan 23 17:31:58.082469 kubelet[3265]: E0123 17:31:58.082421 3265 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547.1.0-a-71c1b0067a\" not found" Jan 23 17:31:58.220813 kubelet[3265]: 
E0123 17:31:58.183323 3265 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4547.1.0-a-71c1b0067a\" not found" Jan 23 17:31:58.222885 kubelet[3265]: I0123 17:31:58.222422 3265 policy_none.go:49] "None policy: Start" Jan 23 17:31:58.222885 kubelet[3265]: I0123 17:31:58.222456 3265 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 17:31:58.222885 kubelet[3265]: I0123 17:31:58.222468 3265 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 17:31:58.223000 audit[3314]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3314 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:58.223000 audit[3314]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffce80fc80 a2=0 a3=0 items=0 ppid=3265 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:58.223000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Jan 23 17:31:58.225003 kubelet[3265]: I0123 17:31:58.224910 3265 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 17:31:58.224000 audit[3315]: NETFILTER_CFG table=mangle:50 family=2 entries=1 op=nft_register_chain pid=3315 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:58.224000 audit[3315]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff15aab10 a2=0 a3=0 items=0 ppid=3265 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:58.224000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 23 17:31:58.225000 audit[3316]: NETFILTER_CFG table=mangle:51 family=10 entries=2 op=nft_register_chain pid=3316 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:31:58.225000 audit[3316]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe63ddc00 a2=0 a3=0 items=0 ppid=3265 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:58.225000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 23 17:31:58.226505 kubelet[3265]: I0123 17:31:58.226487 3265 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 17:31:58.226601 kubelet[3265]: I0123 17:31:58.226594 3265 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 17:31:58.226000 audit[3317]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=3317 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:58.226000 audit[3317]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4c6f650 a2=0 a3=0 items=0 ppid=3265 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:58.226000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 23 17:31:58.227750 kubelet[3265]: I0123 17:31:58.227271 3265 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 17:31:58.227750 kubelet[3265]: E0123 17:31:58.227327 3265 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:31:58.227000 audit[3318]: NETFILTER_CFG table=mangle:53 family=10 entries=1 op=nft_register_chain pid=3318 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:31:58.227000 audit[3318]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd263f260 a2=0 a3=0 items=0 ppid=3265 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:58.229390 kubelet[3265]: E0123 17:31:58.229360 3265 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 17:31:58.227000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 23 17:31:58.228000 audit[3320]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:31:58.228000 audit[3320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff4f1cb00 a2=0 a3=0 items=0 ppid=3265 pid=3320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:58.228000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 23 17:31:58.229000 audit[3321]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3321 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:31:58.229000 audit[3321]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe44d1760 a2=0 a3=0 items=0 ppid=3265 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:58.229000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 23 17:31:58.230000 audit[3322]: NETFILTER_CFG 
table=filter:56 family=10 entries=1 op=nft_register_chain pid=3322 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:31:58.230000 audit[3322]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffea060960 a2=0 a3=0 items=0 ppid=3265 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:58.230000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 23 17:31:58.233897 kubelet[3265]: I0123 17:31:58.233869 3265 policy_none.go:47] "Start" Jan 23 17:31:58.237769 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 17:31:58.249157 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 17:31:58.252884 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 17:31:58.262687 kubelet[3265]: E0123 17:31:58.262655 3265 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 17:31:58.263501 kubelet[3265]: I0123 17:31:58.263485 3265 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:31:58.263933 kubelet[3265]: I0123 17:31:58.263501 3265 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 17:31:58.263933 kubelet[3265]: I0123 17:31:58.263831 3265 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:31:58.265882 kubelet[3265]: E0123 17:31:58.265747 3265 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 17:31:58.265882 kubelet[3265]: E0123 17:31:58.265820 3265 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547.1.0-a-71c1b0067a\" not found" Jan 23 17:31:58.282995 kubelet[3265]: E0123 17:31:58.282955 3265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.1.0-a-71c1b0067a?timeout=10s\": dial tcp 10.200.20.22:6443: connect: connection refused" interval="800ms" Jan 23 17:31:58.342219 systemd[1]: Created slice kubepods-burstable-podab736506c78ee50a9a0a608b7c06f782.slice - libcontainer container kubepods-burstable-podab736506c78ee50a9a0a608b7c06f782.slice. 
Jan 23 17:31:58.349578 kubelet[3265]: E0123 17:31:58.349531 3265 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-71c1b0067a\" not found" node="ci-4547.1.0-a-71c1b0067a" Jan 23 17:31:58.366245 kubelet[3265]: I0123 17:31:58.366213 3265 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-a-71c1b0067a" Jan 23 17:31:58.366597 kubelet[3265]: E0123 17:31:58.366567 3265 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.22:6443/api/v1/nodes\": dial tcp 10.200.20.22:6443: connect: connection refused" node="ci-4547.1.0-a-71c1b0067a" Jan 23 17:31:58.384530 kubelet[3265]: I0123 17:31:58.384299 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab736506c78ee50a9a0a608b7c06f782-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.1.0-a-71c1b0067a\" (UID: \"ab736506c78ee50a9a0a608b7c06f782\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-71c1b0067a" Jan 23 17:31:58.384530 kubelet[3265]: I0123 17:31:58.384329 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22aa9574dd6400146c1627250df941e5-ca-certs\") pod \"kube-controller-manager-ci-4547.1.0-a-71c1b0067a\" (UID: \"22aa9574dd6400146c1627250df941e5\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-71c1b0067a" Jan 23 17:31:58.384530 kubelet[3265]: I0123 17:31:58.384348 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/22aa9574dd6400146c1627250df941e5-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.1.0-a-71c1b0067a\" (UID: \"22aa9574dd6400146c1627250df941e5\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-71c1b0067a" Jan 23 17:31:58.384530 kubelet[3265]: I0123 17:31:58.384359 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22aa9574dd6400146c1627250df941e5-k8s-certs\") pod \"kube-controller-manager-ci-4547.1.0-a-71c1b0067a\" (UID: \"22aa9574dd6400146c1627250df941e5\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-71c1b0067a" Jan 23 17:31:58.384530 kubelet[3265]: I0123 17:31:58.384368 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/22aa9574dd6400146c1627250df941e5-kubeconfig\") pod \"kube-controller-manager-ci-4547.1.0-a-71c1b0067a\" (UID: \"22aa9574dd6400146c1627250df941e5\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-71c1b0067a" Jan 23 17:31:58.384722 kubelet[3265]: I0123 17:31:58.384379 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22aa9574dd6400146c1627250df941e5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.1.0-a-71c1b0067a\" (UID: \"22aa9574dd6400146c1627250df941e5\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-71c1b0067a" Jan 23 17:31:58.384722 kubelet[3265]: I0123 17:31:58.384389 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/ab736506c78ee50a9a0a608b7c06f782-ca-certs\") pod \"kube-apiserver-ci-4547.1.0-a-71c1b0067a\" (UID: \"ab736506c78ee50a9a0a608b7c06f782\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-71c1b0067a" Jan 23 17:31:58.384722 kubelet[3265]: I0123 17:31:58.384398 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab736506c78ee50a9a0a608b7c06f782-k8s-certs\") pod \"kube-apiserver-ci-4547.1.0-a-71c1b0067a\" (UID: \"ab736506c78ee50a9a0a608b7c06f782\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-71c1b0067a" Jan 23 17:31:58.389474 systemd[1]: Created slice kubepods-burstable-pod22aa9574dd6400146c1627250df941e5.slice - libcontainer container kubepods-burstable-pod22aa9574dd6400146c1627250df941e5.slice. Jan 23 17:31:58.391231 kubelet[3265]: E0123 17:31:58.391203 3265 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-71c1b0067a\" not found" node="ci-4547.1.0-a-71c1b0067a" Jan 23 17:31:58.438114 systemd[1]: Created slice kubepods-burstable-pod928ca77475bcbe3b71ad85e8027d194e.slice - libcontainer container kubepods-burstable-pod928ca77475bcbe3b71ad85e8027d194e.slice. Jan 23 17:31:58.439695 kubelet[3265]: E0123 17:31:58.439668 3265 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-71c1b0067a\" not found" node="ci-4547.1.0-a-71c1b0067a" Jan 23 17:31:58.484371 kubelet[3265]: E0123 17:31:58.484226 3265 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 17:31:58.484683 kubelet[3265]: I0123 17:31:58.484585 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/928ca77475bcbe3b71ad85e8027d194e-kubeconfig\") pod \"kube-scheduler-ci-4547.1.0-a-71c1b0067a\" (UID: \"928ca77475bcbe3b71ad85e8027d194e\") " pod="kube-system/kube-scheduler-ci-4547.1.0-a-71c1b0067a" Jan 23 17:31:58.569227 kubelet[3265]: I0123 17:31:58.569194 3265 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-a-71c1b0067a" Jan 23 17:31:58.569572 kubelet[3265]: E0123 17:31:58.569542 3265 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.22:6443/api/v1/nodes\": dial tcp 10.200.20.22:6443: connect: connection refused" node="ci-4547.1.0-a-71c1b0067a" Jan 23 17:31:58.655749 containerd[2084]: time="2026-01-23T17:31:58.655706629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.1.0-a-71c1b0067a,Uid:ab736506c78ee50a9a0a608b7c06f782,Namespace:kube-system,Attempt:0,}" Jan 23 17:31:58.696906 containerd[2084]: time="2026-01-23T17:31:58.696861204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.1.0-a-71c1b0067a,Uid:22aa9574dd6400146c1627250df941e5,Namespace:kube-system,Attempt:0,}" Jan 23 17:31:58.745181 containerd[2084]: time="2026-01-23T17:31:58.744922294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.1.0-a-71c1b0067a,Uid:928ca77475bcbe3b71ad85e8027d194e,Namespace:kube-system,Attempt:0,}" Jan 23 17:31:58.896217 
kubelet[3265]: E0123 17:31:58.896160 3265 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.1.0-a-71c1b0067a&limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 17:31:58.972029 kubelet[3265]: I0123 17:31:58.971990 3265 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-a-71c1b0067a" Jan 23 17:31:58.972313 kubelet[3265]: E0123 17:31:58.972268 3265 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.22:6443/api/v1/nodes\": dial tcp 10.200.20.22:6443: connect: connection refused" node="ci-4547.1.0-a-71c1b0067a" Jan 23 17:31:58.993353 kubelet[3265]: E0123 17:31:58.993317 3265 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 17:31:59.084222 kubelet[3265]: E0123 17:31:59.084091 3265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.1.0-a-71c1b0067a?timeout=10s\": dial tcp 10.200.20.22:6443: connect: connection refused" interval="1.6s" Jan 23 17:31:59.315738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2351078707.mount: Deactivated successfully. Jan 23 17:31:59.334969 containerd[2084]: time="2026-01-23T17:31:59.334706960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:31:59.342792 containerd[2084]: time="2026-01-23T17:31:59.342714873Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 17:31:59.345489 containerd[2084]: time="2026-01-23T17:31:59.345446565Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:31:59.348892 containerd[2084]: time="2026-01-23T17:31:59.348431688Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:31:59.353876 containerd[2084]: time="2026-01-23T17:31:59.353311439Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 17:31:59.355991 containerd[2084]: time="2026-01-23T17:31:59.355951558Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:31:59.358890 containerd[2084]: time="2026-01-23T17:31:59.358598965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:31:59.359129 containerd[2084]: 
time="2026-01-23T17:31:59.359101794Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 697.116534ms" Jan 23 17:31:59.360868 containerd[2084]: time="2026-01-23T17:31:59.360768169Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 17:31:59.363692 containerd[2084]: time="2026-01-23T17:31:59.363647661Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 613.520253ms" Jan 23 17:31:59.388040 containerd[2084]: time="2026-01-23T17:31:59.387977227Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 685.352558ms" Jan 23 17:31:59.408670 containerd[2084]: time="2026-01-23T17:31:59.408492239Z" level=info msg="connecting to shim 97a63114061c40476d58ebee5b2bc0d774905b2846f97932bed53b7ffa368a31" address="unix:///run/containerd/s/d42191d7da8cb2026097d4ffb4116934d2c07ea877595374bbd4d9e0d79f3760" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:31:59.410489 containerd[2084]: time="2026-01-23T17:31:59.410457952Z" level=info msg="connecting to shim 72b3a46120fe7f75ba1b5bef091d5219258f4a32e2a52dbbc6a5bfbfa929649c" address="unix:///run/containerd/s/5c35ccc58652e0a53fd0000544d7dd5473083ad9e35563039498648dd4c904c4" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:31:59.436060 systemd[1]: Started cri-containerd-97a63114061c40476d58ebee5b2bc0d774905b2846f97932bed53b7ffa368a31.scope - libcontainer container 97a63114061c40476d58ebee5b2bc0d774905b2846f97932bed53b7ffa368a31. Jan 23 17:31:59.439251 systemd[1]: Started cri-containerd-72b3a46120fe7f75ba1b5bef091d5219258f4a32e2a52dbbc6a5bfbfa929649c.scope - libcontainer container 72b3a46120fe7f75ba1b5bef091d5219258f4a32e2a52dbbc6a5bfbfa929649c. 
Jan 23 17:31:59.446504 containerd[2084]: time="2026-01-23T17:31:59.446398085Z" level=info msg="connecting to shim 359006ce37f727d309315132e06a9be60ab4d49e6be82e9be708e02058a3f287" address="unix:///run/containerd/s/5b4b75fa6b51a04fadfea740ee1cc70e9b18056de0cc062ca43da810c620d763" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:31:59.463187 kernel: kauditd_printk_skb: 72 callbacks suppressed Jan 23 17:31:59.463315 kernel: audit: type=1334 audit(1769189519.454:381): prog-id=107 op=LOAD Jan 23 17:31:59.454000 audit: BPF prog-id=107 op=LOAD Jan 23 17:31:59.472991 kernel: audit: type=1334 audit(1769189519.458:382): prog-id=108 op=LOAD Jan 23 17:31:59.458000 audit: BPF prog-id=108 op=LOAD Jan 23 17:31:59.458000 audit[3360]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3339 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.492355 kernel: audit: type=1300 audit(1769189519.458:382): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3339 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937613633313134303631633430343736643538656265653562326263 Jan 23 17:31:59.509386 kernel: audit: type=1327 audit(1769189519.458:382): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937613633313134303631633430343736643538656265653562326263 Jan 23 17:31:59.461000 audit: BPF prog-id=108 op=UNLOAD Jan 23 17:31:59.516369 kernel: audit: type=1334 audit(1769189519.461:383): prog-id=108 op=UNLOAD Jan 23 17:31:59.461000 audit[3360]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3339 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937613633313134303631633430343736643538656265653562326263 Jan 23 17:31:59.553198 kernel: audit: type=1300 audit(1769189519.461:383): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3339 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.553313 kernel: audit: type=1327 audit(1769189519.461:383): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937613633313134303631633430343736643538656265653562326263 Jan 23 17:31:59.461000 audit: BPF prog-id=109 op=LOAD Jan 23 17:31:59.557460 
kernel: audit: type=1334 audit(1769189519.461:384): prog-id=109 op=LOAD Jan 23 17:31:59.461000 audit[3360]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3339 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.574134 kernel: audit: type=1300 audit(1769189519.461:384): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3339 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937613633313134303631633430343736643538656265653562326263 Jan 23 17:31:59.575701 systemd[1]: Started cri-containerd-359006ce37f727d309315132e06a9be60ab4d49e6be82e9be708e02058a3f287.scope - libcontainer container 359006ce37f727d309315132e06a9be60ab4d49e6be82e9be708e02058a3f287. Jan 23 17:31:59.583027 containerd[2084]: time="2026-01-23T17:31:59.582516446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.1.0-a-71c1b0067a,Uid:ab736506c78ee50a9a0a608b7c06f782,Namespace:kube-system,Attempt:0,} returns sandbox id \"97a63114061c40476d58ebee5b2bc0d774905b2846f97932bed53b7ffa368a31\"" Jan 23 17:31:59.585781 containerd[2084]: time="2026-01-23T17:31:59.585678115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.1.0-a-71c1b0067a,Uid:928ca77475bcbe3b71ad85e8027d194e,Namespace:kube-system,Attempt:0,} returns sandbox id \"72b3a46120fe7f75ba1b5bef091d5219258f4a32e2a52dbbc6a5bfbfa929649c\"" Jan 23 17:31:59.590429 kernel: audit: type=1327 audit(1769189519.461:384): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937613633313134303631633430343736643538656265653562326263 Jan 23 17:31:59.592430 containerd[2084]: time="2026-01-23T17:31:59.592385458Z" level=info msg="CreateContainer within sandbox \"97a63114061c40476d58ebee5b2bc0d774905b2846f97932bed53b7ffa368a31\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 17:31:59.462000 audit: BPF prog-id=110 op=LOAD Jan 23 17:31:59.462000 audit[3360]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3339 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.462000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937613633313134303631633430343736643538656265653562326263 Jan 23 17:31:59.462000 audit: BPF prog-id=110 op=UNLOAD Jan 23 17:31:59.462000 audit[3360]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3339 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.462000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937613633313134303631633430343736643538656265653562326263 Jan 23 17:31:59.462000 audit: BPF prog-id=109 op=UNLOAD Jan 23 17:31:59.462000 audit[3360]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3339 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.462000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937613633313134303631633430343736643538656265653562326263 Jan 23 17:31:59.462000 audit: BPF prog-id=111 op=LOAD Jan 23 17:31:59.462000 audit[3360]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3339 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.462000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937613633313134303631633430343736643538656265653562326263 Jan 23 17:31:59.464000 audit: BPF prog-id=112 op=LOAD Jan 23 17:31:59.465000 audit: BPF prog-id=113 op=LOAD Jan 23 17:31:59.465000 audit[3366]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3344 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732623361343631323066653766373562613162356265663039316435 Jan 23 17:31:59.465000 audit: BPF prog-id=113 op=UNLOAD Jan 23 17:31:59.465000 audit[3366]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3344 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732623361343631323066653766373562613162356265663039316435 Jan 23 17:31:59.465000 audit: BPF prog-id=114 op=LOAD Jan 23 17:31:59.465000 audit[3366]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3344 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.465000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732623361343631323066653766373562613162356265663039316435 Jan 23 17:31:59.465000 audit: BPF prog-id=115 op=LOAD Jan 23 17:31:59.465000 audit[3366]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3344 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732623361343631323066653766373562613162356265663039316435 Jan 23 17:31:59.465000 audit: BPF prog-id=115 op=UNLOAD Jan 23 17:31:59.465000 audit[3366]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3344 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732623361343631323066653766373562613162356265663039316435 Jan 23 17:31:59.465000 audit: BPF prog-id=114 op=UNLOAD Jan 23 17:31:59.465000 audit[3366]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3344 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732623361343631323066653766373562613162356265663039316435 Jan 23 17:31:59.465000 audit: BPF prog-id=116 op=LOAD Jan 23 17:31:59.465000 audit[3366]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3344 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732623361343631323066653766373562613162356265663039316435 Jan 23 17:31:59.601000 audit: BPF prog-id=117 op=LOAD Jan 23 17:31:59.603000 audit: BPF prog-id=118 op=LOAD Jan 23 17:31:59.603000 audit[3424]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3398 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.603000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335393030366365333766373237643330393331353133326530366139 Jan 23 17:31:59.603000 audit: BPF prog-id=118 op=UNLOAD Jan 23 17:31:59.603000 audit[3424]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3398 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335393030366365333766373237643330393331353133326530366139 Jan 23 17:31:59.603000 audit: BPF prog-id=119 op=LOAD Jan 23 17:31:59.603000 audit[3424]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3398 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335393030366365333766373237643330393331353133326530366139 Jan 23 17:31:59.603000 audit: BPF prog-id=120 op=LOAD Jan 23 17:31:59.603000 audit[3424]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3398 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335393030366365333766373237643330393331353133326530366139 Jan 23 17:31:59.603000 audit: BPF prog-id=120 op=UNLOAD Jan 23 17:31:59.603000 audit[3424]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3398 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335393030366365333766373237643330393331353133326530366139 Jan 23 17:31:59.603000 audit: BPF prog-id=119 op=UNLOAD Jan 23 17:31:59.603000 audit[3424]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3398 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.603000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335393030366365333766373237643330393331353133326530366139 Jan 23 17:31:59.603000 audit: BPF prog-id=121 op=LOAD Jan 23 17:31:59.603000 audit[3424]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3398 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335393030366365333766373237643330393331353133326530366139 Jan 23 17:31:59.609753 containerd[2084]: time="2026-01-23T17:31:59.609715576Z" level=info msg="Container f4bf6b9ab470998d9c6a921453b6c056b631b732518898cc9d7ca7b1e1f18059: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:31:59.609955 containerd[2084]: time="2026-01-23T17:31:59.609836895Z" level=info msg="CreateContainer within sandbox \"72b3a46120fe7f75ba1b5bef091d5219258f4a32e2a52dbbc6a5bfbfa929649c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 17:31:59.630655 containerd[2084]: time="2026-01-23T17:31:59.630559447Z" level=info msg="CreateContainer within sandbox \"97a63114061c40476d58ebee5b2bc0d774905b2846f97932bed53b7ffa368a31\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f4bf6b9ab470998d9c6a921453b6c056b631b732518898cc9d7ca7b1e1f18059\"" Jan 23 17:31:59.631703 containerd[2084]: time="2026-01-23T17:31:59.631605411Z" level=info msg="StartContainer for \"f4bf6b9ab470998d9c6a921453b6c056b631b732518898cc9d7ca7b1e1f18059\"" Jan 23 17:31:59.633630 containerd[2084]: time="2026-01-23T17:31:59.633601461Z" level=info msg="connecting to shim f4bf6b9ab470998d9c6a921453b6c056b631b732518898cc9d7ca7b1e1f18059" address="unix:///run/containerd/s/d42191d7da8cb2026097d4ffb4116934d2c07ea877595374bbd4d9e0d79f3760" protocol=ttrpc version=3 Jan 23 17:31:59.635762 containerd[2084]: time="2026-01-23T17:31:59.635640601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.1.0-a-71c1b0067a,Uid:22aa9574dd6400146c1627250df941e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"359006ce37f727d309315132e06a9be60ab4d49e6be82e9be708e02058a3f287\"" Jan 23 17:31:59.645520 containerd[2084]: time="2026-01-23T17:31:59.645391566Z" level=info msg="Container d32847bdb239c91f03fea232f77eb12273f19f8c38d7fbdccadad5a783e5d7c2: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:31:59.647155 containerd[2084]: time="2026-01-23T17:31:59.647090575Z" level=info msg="CreateContainer within sandbox \"359006ce37f727d309315132e06a9be60ab4d49e6be82e9be708e02058a3f287\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 17:31:59.658259 systemd[1]: Started cri-containerd-f4bf6b9ab470998d9c6a921453b6c056b631b732518898cc9d7ca7b1e1f18059.scope - libcontainer container f4bf6b9ab470998d9c6a921453b6c056b631b732518898cc9d7ca7b1e1f18059. 
Jan 23 17:31:59.663092 containerd[2084]: time="2026-01-23T17:31:59.662912687Z" level=info msg="CreateContainer within sandbox \"72b3a46120fe7f75ba1b5bef091d5219258f4a32e2a52dbbc6a5bfbfa929649c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d32847bdb239c91f03fea232f77eb12273f19f8c38d7fbdccadad5a783e5d7c2\"" Jan 23 17:31:59.665661 containerd[2084]: time="2026-01-23T17:31:59.664447015Z" level=info msg="StartContainer for \"d32847bdb239c91f03fea232f77eb12273f19f8c38d7fbdccadad5a783e5d7c2\"" Jan 23 17:31:59.665661 containerd[2084]: time="2026-01-23T17:31:59.665246541Z" level=info msg="connecting to shim d32847bdb239c91f03fea232f77eb12273f19f8c38d7fbdccadad5a783e5d7c2" address="unix:///run/containerd/s/5c35ccc58652e0a53fd0000544d7dd5473083ad9e35563039498648dd4c904c4" protocol=ttrpc version=3 Jan 23 17:31:59.671000 audit: BPF prog-id=122 op=LOAD Jan 23 17:31:59.672000 audit: BPF prog-id=123 op=LOAD Jan 23 17:31:59.672000 audit[3464]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3339 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634626636623961623437303939386439633661393231343533623663 Jan 23 17:31:59.672000 audit: BPF prog-id=123 op=UNLOAD Jan 23 17:31:59.672000 audit[3464]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3339 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634626636623961623437303939386439633661393231343533623663 Jan 23 17:31:59.673000 audit: BPF prog-id=124 op=LOAD Jan 23 17:31:59.673000 audit[3464]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3339 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634626636623961623437303939386439633661393231343533623663 Jan 23 17:31:59.673000 audit: BPF prog-id=125 op=LOAD Jan 23 17:31:59.673000 audit[3464]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3339 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.673000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634626636623961623437303939386439633661393231343533623663 Jan 23 17:31:59.673000 audit: BPF prog-id=125 op=UNLOAD Jan 23 17:31:59.673000 audit[3464]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3339 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634626636623961623437303939386439633661393231343533623663 Jan 23 17:31:59.673000 audit: BPF prog-id=124 op=UNLOAD Jan 23 17:31:59.673000 audit[3464]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3339 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634626636623961623437303939386439633661393231343533623663 Jan 23 17:31:59.673000 audit: BPF prog-id=126 op=LOAD Jan 23 17:31:59.673000 audit[3464]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3339 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634626636623961623437303939386439633661393231343533623663 Jan 23 17:31:59.678381 containerd[2084]: time="2026-01-23T17:31:59.678344585Z" level=info msg="Container e2b0ce52e8fca92e6a03564b2aab5746bc6b96c08153da452e7dbff5dc6689a2: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:31:59.689049 systemd[1]: Started cri-containerd-d32847bdb239c91f03fea232f77eb12273f19f8c38d7fbdccadad5a783e5d7c2.scope - libcontainer container d32847bdb239c91f03fea232f77eb12273f19f8c38d7fbdccadad5a783e5d7c2. 
Jan 23 17:31:59.700119 containerd[2084]: time="2026-01-23T17:31:59.698840780Z" level=info msg="CreateContainer within sandbox \"359006ce37f727d309315132e06a9be60ab4d49e6be82e9be708e02058a3f287\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e2b0ce52e8fca92e6a03564b2aab5746bc6b96c08153da452e7dbff5dc6689a2\"" Jan 23 17:31:59.702288 containerd[2084]: time="2026-01-23T17:31:59.702114607Z" level=info msg="StartContainer for \"e2b0ce52e8fca92e6a03564b2aab5746bc6b96c08153da452e7dbff5dc6689a2\"" Jan 23 17:31:59.702959 containerd[2084]: time="2026-01-23T17:31:59.702930430Z" level=info msg="connecting to shim e2b0ce52e8fca92e6a03564b2aab5746bc6b96c08153da452e7dbff5dc6689a2" address="unix:///run/containerd/s/5b4b75fa6b51a04fadfea740ee1cc70e9b18056de0cc062ca43da810c620d763" protocol=ttrpc version=3 Jan 23 17:31:59.703000 audit: BPF prog-id=127 op=LOAD Jan 23 17:31:59.704000 audit: BPF prog-id=128 op=LOAD Jan 23 17:31:59.704000 audit[3483]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=3344 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433323834376264623233396339316630336665613233326637376562 Jan 23 17:31:59.704000 audit: BPF prog-id=128 op=UNLOAD Jan 23 17:31:59.704000 audit[3483]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3344 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433323834376264623233396339316630336665613233326637376562 Jan 23 17:31:59.705000 audit: BPF prog-id=129 op=LOAD Jan 23 17:31:59.705000 audit[3483]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3344 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433323834376264623233396339316630336665613233326637376562 Jan 23 17:31:59.705000 audit: BPF prog-id=130 op=LOAD Jan 23 17:31:59.705000 audit[3483]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=3344 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.705000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433323834376264623233396339316630336665613233326637376562 Jan 23 17:31:59.705000 audit: BPF prog-id=130 op=UNLOAD Jan 23 17:31:59.705000 audit[3483]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3344 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433323834376264623233396339316630336665613233326637376562 Jan 23 17:31:59.705000 audit: BPF prog-id=129 op=UNLOAD Jan 23 17:31:59.705000 audit[3483]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3344 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433323834376264623233396339316630336665613233326637376562 Jan 23 17:31:59.705000 audit: BPF prog-id=131 op=LOAD Jan 23 17:31:59.705000 audit[3483]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=3344 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433323834376264623233396339316630336665613233326637376562 Jan 23 17:31:59.710627 kubelet[3265]: E0123 17:31:59.710590 3265 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 17:31:59.717131 containerd[2084]: time="2026-01-23T17:31:59.717011306Z" level=info msg="StartContainer for \"f4bf6b9ab470998d9c6a921453b6c056b631b732518898cc9d7ca7b1e1f18059\" returns successfully" Jan 23 17:31:59.734059 systemd[1]: Started cri-containerd-e2b0ce52e8fca92e6a03564b2aab5746bc6b96c08153da452e7dbff5dc6689a2.scope - libcontainer container e2b0ce52e8fca92e6a03564b2aab5746bc6b96c08153da452e7dbff5dc6689a2. 
Jan 23 17:31:59.748415 kubelet[3265]: E0123 17:31:59.748370 3265 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 17:31:59.752040 containerd[2084]: time="2026-01-23T17:31:59.752006346Z" level=info msg="StartContainer for \"d32847bdb239c91f03fea232f77eb12273f19f8c38d7fbdccadad5a783e5d7c2\" returns successfully" Jan 23 17:31:59.760000 audit: BPF prog-id=132 op=LOAD Jan 23 17:31:59.762000 audit: BPF prog-id=133 op=LOAD Jan 23 17:31:59.762000 audit[3509]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3398 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532623063653532653866636139326536613033353634623261616235 Jan 23 17:31:59.762000 audit: BPF prog-id=133 op=UNLOAD Jan 23 17:31:59.762000 audit[3509]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3398 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532623063653532653866636139326536613033353634623261616235 Jan 23 17:31:59.762000 audit: BPF prog-id=134 op=LOAD Jan 23 17:31:59.762000 audit[3509]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3398 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532623063653532653866636139326536613033353634623261616235 Jan 23 17:31:59.762000 audit: BPF prog-id=135 op=LOAD Jan 23 17:31:59.762000 audit[3509]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3398 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532623063653532653866636139326536613033353634623261616235 Jan 23 17:31:59.762000 audit: BPF prog-id=135 op=UNLOAD Jan 23 17:31:59.762000 audit[3509]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 
ppid=3398 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532623063653532653866636139326536613033353634623261616235 Jan 23 17:31:59.762000 audit: BPF prog-id=134 op=UNLOAD Jan 23 17:31:59.762000 audit[3509]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3398 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532623063653532653866636139326536613033353634623261616235 Jan 23 17:31:59.762000 audit: BPF prog-id=136 op=LOAD Jan 23 17:31:59.762000 audit[3509]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3398 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:31:59.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532623063653532653866636139326536613033353634623261616235 Jan 23 17:31:59.775631 kubelet[3265]: I0123 17:31:59.775472 3265 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-a-71c1b0067a" Jan 23 17:31:59.804106 containerd[2084]: time="2026-01-23T17:31:59.804065320Z" level=info msg="StartContainer for \"e2b0ce52e8fca92e6a03564b2aab5746bc6b96c08153da452e7dbff5dc6689a2\" returns successfully" Jan 23 17:32:00.239226 kubelet[3265]: E0123 17:32:00.239196 3265 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-71c1b0067a\" not found" node="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:00.241879 kubelet[3265]: E0123 17:32:00.241556 3265 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-71c1b0067a\" not found" node="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:00.245207 kubelet[3265]: E0123 17:32:00.245078 3265 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-71c1b0067a\" not found" node="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:01.246821 kubelet[3265]: E0123 17:32:01.246652 3265 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-71c1b0067a\" not found" node="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:01.246821 kubelet[3265]: E0123 17:32:01.246690 3265 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-a-71c1b0067a\" not found" node="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:01.266351 kubelet[3265]: E0123 17:32:01.266310 3265 nodelease.go:49] "Failed to get node when 
trying to set owner ref to the node lease" err="nodes \"ci-4547.1.0-a-71c1b0067a\" not found" node="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:01.380958 kubelet[3265]: I0123 17:32:01.380915 3265 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:01.381937 kubelet[3265]: I0123 17:32:01.381905 3265 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:01.482883 kubelet[3265]: E0123 17:32:01.482271 3265 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.1.0-a-71c1b0067a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:01.482883 kubelet[3265]: I0123 17:32:01.482302 3265 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:01.487883 kubelet[3265]: E0123 17:32:01.486969 3265 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.1.0-a-71c1b0067a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:01.487883 kubelet[3265]: I0123 17:32:01.486999 3265 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:01.488643 kubelet[3265]: E0123 17:32:01.488614 3265 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.1.0-a-71c1b0067a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:01.668015 kubelet[3265]: I0123 17:32:01.667916 3265 apiserver.go:52] "Watching apiserver" Jan 23 17:32:01.680745 kubelet[3265]: I0123 17:32:01.680696 3265 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 17:32:03.810994 systemd[1]: Reload requested from client PID 3570 ('systemctl') (unit session-10.scope)... Jan 23 17:32:03.811381 systemd[1]: Reloading... Jan 23 17:32:03.918932 zram_generator::config[3624]: No configuration found. Jan 23 17:32:04.113912 systemd[1]: Reloading finished in 302 ms. Jan 23 17:32:04.140641 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:32:04.154255 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 17:32:04.154697 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:32:04.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:04.155005 systemd[1]: kubelet.service: Consumed 967ms CPU time, 121.2M memory peak. Jan 23 17:32:04.157703 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 17:32:04.158000 audit: BPF prog-id=137 op=LOAD Jan 23 17:32:04.158000 audit: BPF prog-id=103 op=UNLOAD Jan 23 17:32:04.158000 audit: BPF prog-id=138 op=LOAD Jan 23 17:32:04.158000 audit: BPF prog-id=139 op=LOAD Jan 23 17:32:04.158000 audit: BPF prog-id=104 op=UNLOAD Jan 23 17:32:04.158000 audit: BPF prog-id=105 op=UNLOAD Jan 23 17:32:04.159000 audit: BPF prog-id=140 op=LOAD Jan 23 17:32:04.159000 audit: BPF prog-id=87 op=UNLOAD Jan 23 17:32:04.159000 audit: BPF prog-id=141 op=LOAD Jan 23 17:32:04.159000 audit: BPF prog-id=142 op=LOAD Jan 23 17:32:04.159000 audit: BPF prog-id=88 op=UNLOAD Jan 23 17:32:04.159000 audit: BPF prog-id=89 op=UNLOAD Jan 23 17:32:04.160000 audit: BPF prog-id=143 op=LOAD Jan 23 17:32:04.164000 audit: BPF prog-id=99 op=UNLOAD Jan 23 17:32:04.164000 audit: BPF prog-id=144 op=LOAD Jan 23 17:32:04.164000 audit: BPF prog-id=145 op=LOAD Jan 23 17:32:04.164000 audit: BPF prog-id=100 op=UNLOAD Jan 23 17:32:04.164000 audit: BPF prog-id=101 op=UNLOAD Jan 23 17:32:04.165000 audit: BPF prog-id=146 op=LOAD Jan 23 17:32:04.165000 audit: BPF prog-id=90 op=UNLOAD Jan 23 17:32:04.165000 audit: BPF prog-id=147 op=LOAD Jan 23 17:32:04.165000 audit: BPF prog-id=148 op=LOAD Jan 23 17:32:04.165000 audit: BPF prog-id=91 op=UNLOAD Jan 23 17:32:04.165000 audit: BPF prog-id=92 op=UNLOAD Jan 23 17:32:04.166000 audit: BPF prog-id=149 op=LOAD Jan 23 17:32:04.166000 audit: BPF prog-id=106 op=UNLOAD Jan 23 17:32:04.166000 audit: BPF prog-id=150 op=LOAD Jan 23 17:32:04.166000 audit: BPF prog-id=93 op=UNLOAD Jan 23 17:32:04.166000 audit: BPF prog-id=151 op=LOAD Jan 23 17:32:04.167000 audit: BPF prog-id=152 op=LOAD Jan 23 17:32:04.167000 audit: BPF prog-id=94 op=UNLOAD Jan 23 17:32:04.167000 audit: BPF prog-id=95 op=UNLOAD Jan 23 17:32:04.167000 audit: BPF prog-id=153 op=LOAD Jan 23 17:32:04.167000 audit: BPF prog-id=96 op=UNLOAD Jan 23 17:32:04.167000 audit: BPF prog-id=154 op=LOAD Jan 23 17:32:04.167000 audit: BPF prog-id=155 op=LOAD Jan 23 17:32:04.167000 audit: BPF prog-id=97 op=UNLOAD Jan 23 17:32:04.167000 audit: BPF prog-id=98 op=UNLOAD Jan 23 17:32:04.167000 audit: BPF prog-id=156 op=LOAD Jan 23 17:32:04.168000 audit: BPF prog-id=102 op=UNLOAD Jan 23 17:32:04.355395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:32:04.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:04.367238 (kubelet)[3685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:32:04.402416 kubelet[3685]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 17:32:04.402416 kubelet[3685]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 17:32:04.404213 kubelet[3685]: I0123 17:32:04.402451 3685 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:32:04.407515 kubelet[3685]: I0123 17:32:04.407485 3685 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 17:32:04.407515 kubelet[3685]: I0123 17:32:04.407511 3685 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:32:04.407730 kubelet[3685]: I0123 17:32:04.407542 3685 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 17:32:04.407730 kubelet[3685]: I0123 17:32:04.407549 3685 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 17:32:04.407730 kubelet[3685]: I0123 17:32:04.407691 3685 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 17:32:04.409015 kubelet[3685]: I0123 17:32:04.408984 3685 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 17:32:04.411105 kubelet[3685]: I0123 17:32:04.410902 3685 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:32:04.415108 kubelet[3685]: I0123 17:32:04.415090 3685 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:32:04.420605 kubelet[3685]: I0123 17:32:04.420577 3685 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 23 17:32:04.420792 kubelet[3685]: I0123 17:32:04.420759 3685 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 17:32:04.421020 kubelet[3685]: I0123 17:32:04.420784 3685 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.1.0-a-71c1b0067a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 17:32:04.421020 kubelet[3685]: I0123 17:32:04.420974 3685 topology_manager.go:138] "Creating topology 
manager with none policy" Jan 23 17:32:04.421020 kubelet[3685]: I0123 17:32:04.420981 3685 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 17:32:04.421020 kubelet[3685]: I0123 17:32:04.421006 3685 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 17:32:04.421792 kubelet[3685]: I0123 17:32:04.421690 3685 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:32:04.421864 kubelet[3685]: I0123 17:32:04.421829 3685 kubelet.go:475] "Attempting to sync node with API server" Jan 23 17:32:04.422454 kubelet[3685]: I0123 17:32:04.422384 3685 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 17:32:04.424558 kubelet[3685]: I0123 17:32:04.424182 3685 kubelet.go:387] "Adding apiserver pod source" Jan 23 17:32:04.424558 kubelet[3685]: I0123 17:32:04.424207 3685 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 17:32:04.427607 kubelet[3685]: I0123 17:32:04.426666 3685 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 23 17:32:04.428121 kubelet[3685]: I0123 17:32:04.427946 3685 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 17:32:04.428121 kubelet[3685]: I0123 17:32:04.427973 3685 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 17:32:04.431334 kubelet[3685]: I0123 17:32:04.430889 3685 server.go:1262] "Started kubelet" Jan 23 17:32:04.432696 kubelet[3685]: I0123 17:32:04.432665 3685 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 17:32:04.443108 kubelet[3685]: I0123 17:32:04.442658 3685 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 17:32:04.443537 kubelet[3685]: I0123 17:32:04.443433 3685 server.go:310] "Adding debug handlers to kubelet server" Jan 23 17:32:04.445645 kubelet[3685]: I0123 17:32:04.445595 3685 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:32:04.445725 kubelet[3685]: I0123 17:32:04.445662 3685 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 17:32:04.445826 kubelet[3685]: I0123 17:32:04.445806 3685 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:32:04.446074 kubelet[3685]: I0123 17:32:04.446048 3685 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:32:04.447314 kubelet[3685]: I0123 17:32:04.447289 3685 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 17:32:04.451798 kubelet[3685]: I0123 17:32:04.451089 3685 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 17:32:04.451798 kubelet[3685]: I0123 17:32:04.451219 3685 reconciler.go:29] "Reconciler: start to sync state" Jan 23 17:32:04.453340 kubelet[3685]: I0123 17:32:04.453076 3685 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 23 17:32:04.454464 kubelet[3685]: I0123 17:32:04.454440 3685 factory.go:223] Registration of the systemd container factory successfully Jan 23 17:32:04.454649 kubelet[3685]: I0123 17:32:04.454629 3685 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 17:32:04.454926 kubelet[3685]: I0123 17:32:04.454460 3685 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 23 17:32:04.455003 kubelet[3685]: I0123 17:32:04.454995 3685 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 17:32:04.455105 kubelet[3685]: I0123 17:32:04.455096 3685 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 17:32:04.455190 kubelet[3685]: E0123 17:32:04.455175 3685 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:32:04.461177 kubelet[3685]: E0123 17:32:04.461155 3685 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:32:04.462111 kubelet[3685]: I0123 17:32:04.462083 3685 factory.go:223] Registration of the containerd container factory successfully Jan 23 17:32:04.499880 kubelet[3685]: I0123 17:32:04.499849 3685 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 17:32:04.500212 kubelet[3685]: I0123 17:32:04.500015 3685 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:32:04.500212 kubelet[3685]: I0123 17:32:04.500104 3685 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:32:04.501332 kubelet[3685]: I0123 17:32:04.501266 3685 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 17:32:04.501530 kubelet[3685]: I0123 17:32:04.501464 3685 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 17:32:04.501530 kubelet[3685]: I0123 17:32:04.501493 3685 policy_none.go:49] "None policy: Start" Jan 23 17:32:04.501530 kubelet[3685]: I0123 17:32:04.501502 3685 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 17:32:04.501530 kubelet[3685]: I0123 17:32:04.501515 3685 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 17:32:04.501830 kubelet[3685]: I0123 17:32:04.501803 3685 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 23 17:32:04.501830 kubelet[3685]: I0123 17:32:04.501818 3685 policy_none.go:47] "Start" Jan 23 17:32:04.506452 kubelet[3685]: E0123 17:32:04.505980 3685 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 17:32:04.506452 kubelet[3685]: I0123 17:32:04.506140 3685 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:32:04.506452 kubelet[3685]: I0123 17:32:04.506151 3685 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 17:32:04.509077 kubelet[3685]: I0123 17:32:04.508212 3685 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:32:04.511782 kubelet[3685]: E0123 17:32:04.511762 3685 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 17:32:04.556653 kubelet[3685]: I0123 17:32:04.556598 3685 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:04.556889 kubelet[3685]: I0123 17:32:04.556617 3685 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:04.557736 kubelet[3685]: I0123 17:32:04.557036 3685 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:04.565120 kubelet[3685]: I0123 17:32:04.565087 3685 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 17:32:04.569529 kubelet[3685]: I0123 17:32:04.569308 3685 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 17:32:04.569748 kubelet[3685]: I0123 17:32:04.569480 3685 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 17:32:04.615186 kubelet[3685]: I0123 17:32:04.614234 3685 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:04.625143 kubelet[3685]: I0123 17:32:04.625034 3685 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:04.625335 kubelet[3685]: I0123 17:32:04.625208 3685 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:04.752460 kubelet[3685]: I0123 17:32:04.752419 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/22aa9574dd6400146c1627250df941e5-kubeconfig\") pod \"kube-controller-manager-ci-4547.1.0-a-71c1b0067a\" (UID: \"22aa9574dd6400146c1627250df941e5\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:04.752460 kubelet[3685]: I0123 17:32:04.752459 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/928ca77475bcbe3b71ad85e8027d194e-kubeconfig\") pod \"kube-scheduler-ci-4547.1.0-a-71c1b0067a\" (UID: \"928ca77475bcbe3b71ad85e8027d194e\") " pod="kube-system/kube-scheduler-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:04.752460 kubelet[3685]: I0123 17:32:04.752472 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22aa9574dd6400146c1627250df941e5-ca-certs\") pod \"kube-controller-manager-ci-4547.1.0-a-71c1b0067a\" (UID: \"22aa9574dd6400146c1627250df941e5\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:04.752460 kubelet[3685]: I0123 17:32:04.752510 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22aa9574dd6400146c1627250df941e5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.1.0-a-71c1b0067a\" (UID: \"22aa9574dd6400146c1627250df941e5\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:04.752460 kubelet[3685]: I0123 
17:32:04.752536 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab736506c78ee50a9a0a608b7c06f782-ca-certs\") pod \"kube-apiserver-ci-4547.1.0-a-71c1b0067a\" (UID: \"ab736506c78ee50a9a0a608b7c06f782\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:04.752893 kubelet[3685]: I0123 17:32:04.752563 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab736506c78ee50a9a0a608b7c06f782-k8s-certs\") pod \"kube-apiserver-ci-4547.1.0-a-71c1b0067a\" (UID: \"ab736506c78ee50a9a0a608b7c06f782\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:04.752893 kubelet[3685]: I0123 17:32:04.752579 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab736506c78ee50a9a0a608b7c06f782-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.1.0-a-71c1b0067a\" (UID: \"ab736506c78ee50a9a0a608b7c06f782\") " pod="kube-system/kube-apiserver-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:04.752893 kubelet[3685]: I0123 17:32:04.752593 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/22aa9574dd6400146c1627250df941e5-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.1.0-a-71c1b0067a\" (UID: \"22aa9574dd6400146c1627250df941e5\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:04.752893 kubelet[3685]: I0123 17:32:04.752604 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22aa9574dd6400146c1627250df941e5-k8s-certs\") pod \"kube-controller-manager-ci-4547.1.0-a-71c1b0067a\" (UID: \"22aa9574dd6400146c1627250df941e5\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:05.426227 kubelet[3685]: I0123 17:32:05.426144 3685 apiserver.go:52] "Watching apiserver" Jan 23 17:32:05.451605 kubelet[3685]: I0123 17:32:05.451547 3685 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 17:32:05.485882 kubelet[3685]: I0123 17:32:05.485605 3685 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:05.485882 kubelet[3685]: I0123 17:32:05.485751 3685 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:05.486196 kubelet[3685]: I0123 17:32:05.486183 3685 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:05.498656 kubelet[3685]: I0123 17:32:05.498606 3685 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 17:32:05.499282 kubelet[3685]: E0123 17:32:05.499251 3685 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.1.0-a-71c1b0067a\" already exists" pod="kube-system/kube-apiserver-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:05.499457 kubelet[3685]: I0123 17:32:05.499194 3685 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must not contain dots]" Jan 23 17:32:05.499457 kubelet[3685]: E0123 17:32:05.499423 3685 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.1.0-a-71c1b0067a\" already exists" pod="kube-system/kube-scheduler-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:05.499692 kubelet[3685]: I0123 17:32:05.499562 3685 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 17:32:05.499692 kubelet[3685]: E0123 17:32:05.499596 3685 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.1.0-a-71c1b0067a\" already exists" pod="kube-system/kube-controller-manager-ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:05.513871 kubelet[3685]: I0123 17:32:05.513532 3685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547.1.0-a-71c1b0067a" podStartSLOduration=1.513513823 podStartE2EDuration="1.513513823s" podCreationTimestamp="2026-01-23 17:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:32:05.503835183 +0000 UTC m=+1.133381470" watchObservedRunningTime="2026-01-23 17:32:05.513513823 +0000 UTC m=+1.143060118" Jan 23 17:32:05.523408 kubelet[3685]: I0123 17:32:05.523204 3685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547.1.0-a-71c1b0067a" podStartSLOduration=1.523185496 podStartE2EDuration="1.523185496s" podCreationTimestamp="2026-01-23 17:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:32:05.513956358 +0000 UTC m=+1.143502645" watchObservedRunningTime="2026-01-23 17:32:05.523185496 +0000 UTC m=+1.152731783" Jan 23 17:32:05.535451 kubelet[3685]: I0123 17:32:05.535376 3685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547.1.0-a-71c1b0067a" podStartSLOduration=1.535350497 podStartE2EDuration="1.535350497s" podCreationTimestamp="2026-01-23 17:32:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:32:05.52410832 +0000 UTC m=+1.153654631" watchObservedRunningTime="2026-01-23 17:32:05.535350497 +0000 UTC m=+1.164896784" Jan 23 17:32:08.860036 kubelet[3685]: I0123 17:32:08.859991 3685 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 17:32:08.861171 containerd[2084]: time="2026-01-23T17:32:08.860638578Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 17:32:08.862038 kubelet[3685]: I0123 17:32:08.860932 3685 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 17:32:09.568188 systemd[1]: Created slice kubepods-besteffort-podef5b6281_6139_43c0_8f54_be919a5cbc87.slice - libcontainer container kubepods-besteffort-podef5b6281_6139_43c0_8f54_be919a5cbc87.slice. 
Jan 23 17:32:09.580246 kubelet[3685]: I0123 17:32:09.580206 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ef5b6281-6139-43c0-8f54-be919a5cbc87-kube-proxy\") pod \"kube-proxy-b75b4\" (UID: \"ef5b6281-6139-43c0-8f54-be919a5cbc87\") " pod="kube-system/kube-proxy-b75b4" Jan 23 17:32:09.580537 kubelet[3685]: I0123 17:32:09.580423 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef5b6281-6139-43c0-8f54-be919a5cbc87-xtables-lock\") pod \"kube-proxy-b75b4\" (UID: \"ef5b6281-6139-43c0-8f54-be919a5cbc87\") " pod="kube-system/kube-proxy-b75b4" Jan 23 17:32:09.580537 kubelet[3685]: I0123 17:32:09.580443 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef5b6281-6139-43c0-8f54-be919a5cbc87-lib-modules\") pod \"kube-proxy-b75b4\" (UID: \"ef5b6281-6139-43c0-8f54-be919a5cbc87\") " pod="kube-system/kube-proxy-b75b4" Jan 23 17:32:09.580537 kubelet[3685]: I0123 17:32:09.580454 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z847\" (UniqueName: \"kubernetes.io/projected/ef5b6281-6139-43c0-8f54-be919a5cbc87-kube-api-access-8z847\") pod \"kube-proxy-b75b4\" (UID: \"ef5b6281-6139-43c0-8f54-be919a5cbc87\") " pod="kube-system/kube-proxy-b75b4" Jan 23 17:32:09.884702 containerd[2084]: time="2026-01-23T17:32:09.884416815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b75b4,Uid:ef5b6281-6139-43c0-8f54-be919a5cbc87,Namespace:kube-system,Attempt:0,}" Jan 23 17:32:09.927480 containerd[2084]: time="2026-01-23T17:32:09.927386864Z" level=info msg="connecting to shim d0e2657f297466e66e66bfc405ffe36483a5000897720dadf8f23015b6cb21b1" address="unix:///run/containerd/s/092f4f817ea8776dde7d365b222266a36fe3c8d4ec38af19ce898e8572783660" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:32:09.951068 systemd[1]: Started cri-containerd-d0e2657f297466e66e66bfc405ffe36483a5000897720dadf8f23015b6cb21b1.scope - libcontainer container d0e2657f297466e66e66bfc405ffe36483a5000897720dadf8f23015b6cb21b1. 
Jan 23 17:32:09.958000 audit: BPF prog-id=157 op=LOAD Jan 23 17:32:09.962599 kernel: kauditd_printk_skb: 164 callbacks suppressed Jan 23 17:32:09.962648 kernel: audit: type=1334 audit(1769189529.958:471): prog-id=157 op=LOAD Jan 23 17:32:09.966000 audit: BPF prog-id=158 op=LOAD Jan 23 17:32:09.971693 kernel: audit: type=1334 audit(1769189529.966:472): prog-id=158 op=LOAD Jan 23 17:32:09.966000 audit[3751]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3740 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:09.987981 kernel: audit: type=1300 audit(1769189529.966:472): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3740 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:09.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653236353766323937343636653636653636626663343035666665 Jan 23 17:32:10.003987 kernel: audit: type=1327 audit(1769189529.966:472): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653236353766323937343636653636653636626663343035666665 Jan 23 17:32:09.966000 audit: BPF prog-id=158 op=UNLOAD Jan 23 17:32:10.008641 kernel: audit: type=1334 audit(1769189529.966:473): prog-id=158 op=UNLOAD Jan 23 17:32:09.966000 audit[3751]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3740 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.024579 kernel: audit: type=1300 audit(1769189529.966:473): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3740 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:09.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653236353766323937343636653636653636626663343035666665 Jan 23 17:32:10.041318 kernel: audit: type=1327 audit(1769189529.966:473): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653236353766323937343636653636653636626663343035666665 Jan 23 17:32:09.966000 audit: BPF prog-id=159 op=LOAD Jan 23 17:32:10.046172 kernel: audit: type=1334 audit(1769189529.966:474): prog-id=159 op=LOAD Jan 23 17:32:09.966000 audit[3751]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3740 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.064992 kernel: audit: type=1300 audit(1769189529.966:474): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3740 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:09.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653236353766323937343636653636653636626663343035666665 Jan 23 17:32:10.085992 kernel: audit: type=1327 audit(1769189529.966:474): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653236353766323937343636653636653636626663343035666665 Jan 23 17:32:09.966000 audit: BPF prog-id=160 op=LOAD Jan 23 17:32:09.966000 audit[3751]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3740 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:09.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653236353766323937343636653636653636626663343035666665 Jan 23 17:32:09.966000 audit: BPF prog-id=160 op=UNLOAD Jan 23 17:32:09.966000 audit[3751]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3740 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:09.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653236353766323937343636653636653636626663343035666665 Jan 23 17:32:09.966000 audit: BPF prog-id=159 op=UNLOAD Jan 23 17:32:09.966000 audit[3751]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3740 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:09.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653236353766323937343636653636653636626663343035666665 Jan 23 17:32:09.966000 audit: BPF prog-id=161 op=LOAD Jan 23 17:32:09.966000 audit[3751]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3740 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:09.966000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653236353766323937343636653636653636626663343035666665 Jan 23 17:32:10.102801 systemd[1]: Created slice kubepods-besteffort-pode228b3fe_6115_4d9c_ac94_4a2b51328303.slice - libcontainer container kubepods-besteffort-pode228b3fe_6115_4d9c_ac94_4a2b51328303.slice. Jan 23 17:32:10.110274 containerd[2084]: time="2026-01-23T17:32:10.109743473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b75b4,Uid:ef5b6281-6139-43c0-8f54-be919a5cbc87,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0e2657f297466e66e66bfc405ffe36483a5000897720dadf8f23015b6cb21b1\"" Jan 23 17:32:10.123206 containerd[2084]: time="2026-01-23T17:32:10.123164826Z" level=info msg="CreateContainer within sandbox \"d0e2657f297466e66e66bfc405ffe36483a5000897720dadf8f23015b6cb21b1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 17:32:10.146410 containerd[2084]: time="2026-01-23T17:32:10.146273456Z" level=info msg="Container 23c97658808118aff27e484f72d3c6d08d88586e41fc91e55ca29b2f4e646909: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:32:10.162935 containerd[2084]: time="2026-01-23T17:32:10.162885208Z" level=info msg="CreateContainer within sandbox \"d0e2657f297466e66e66bfc405ffe36483a5000897720dadf8f23015b6cb21b1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"23c97658808118aff27e484f72d3c6d08d88586e41fc91e55ca29b2f4e646909\"" Jan 23 17:32:10.165409 containerd[2084]: time="2026-01-23T17:32:10.165001594Z" level=info msg="StartContainer for \"23c97658808118aff27e484f72d3c6d08d88586e41fc91e55ca29b2f4e646909\"" Jan 23 17:32:10.166515 containerd[2084]: time="2026-01-23T17:32:10.166464868Z" level=info msg="connecting to shim 23c97658808118aff27e484f72d3c6d08d88586e41fc91e55ca29b2f4e646909" address="unix:///run/containerd/s/092f4f817ea8776dde7d365b222266a36fe3c8d4ec38af19ce898e8572783660" protocol=ttrpc version=3 Jan 23 17:32:10.183080 kubelet[3685]: I0123 17:32:10.183040 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e228b3fe-6115-4d9c-ac94-4a2b51328303-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-2ghsd\" (UID: \"e228b3fe-6115-4d9c-ac94-4a2b51328303\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-2ghsd" Jan 23 17:32:10.183080 kubelet[3685]: I0123 17:32:10.183077 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmrt5\" (UniqueName: \"kubernetes.io/projected/e228b3fe-6115-4d9c-ac94-4a2b51328303-kube-api-access-gmrt5\") pod \"tigera-operator-65cdcdfd6d-2ghsd\" (UID: \"e228b3fe-6115-4d9c-ac94-4a2b51328303\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-2ghsd" Jan 23 17:32:10.185078 systemd[1]: Started cri-containerd-23c97658808118aff27e484f72d3c6d08d88586e41fc91e55ca29b2f4e646909.scope - libcontainer container 23c97658808118aff27e484f72d3c6d08d88586e41fc91e55ca29b2f4e646909. 
Jan 23 17:32:10.221000 audit: BPF prog-id=162 op=LOAD Jan 23 17:32:10.221000 audit[3777]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3740 pid=3777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233633937363538383038313138616666323765343834663732643363 Jan 23 17:32:10.221000 audit: BPF prog-id=163 op=LOAD Jan 23 17:32:10.221000 audit[3777]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3740 pid=3777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233633937363538383038313138616666323765343834663732643363 Jan 23 17:32:10.221000 audit: BPF prog-id=163 op=UNLOAD Jan 23 17:32:10.221000 audit[3777]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3740 pid=3777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233633937363538383038313138616666323765343834663732643363 Jan 23 17:32:10.221000 audit: BPF prog-id=162 op=UNLOAD Jan 23 17:32:10.221000 audit[3777]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3740 pid=3777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233633937363538383038313138616666323765343834663732643363 Jan 23 17:32:10.221000 audit: BPF prog-id=164 op=LOAD Jan 23 17:32:10.221000 audit[3777]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3740 pid=3777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233633937363538383038313138616666323765343834663732643363 Jan 23 17:32:10.243756 containerd[2084]: time="2026-01-23T17:32:10.243594140Z" level=info msg="StartContainer for 
\"23c97658808118aff27e484f72d3c6d08d88586e41fc91e55ca29b2f4e646909\" returns successfully" Jan 23 17:32:10.414301 containerd[2084]: time="2026-01-23T17:32:10.414193009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-2ghsd,Uid:e228b3fe-6115-4d9c-ac94-4a2b51328303,Namespace:tigera-operator,Attempt:0,}" Jan 23 17:32:10.438000 audit[3842]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3842 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.438000 audit[3842]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffecfd2890 a2=0 a3=1 items=0 ppid=3791 pid=3842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.438000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 23 17:32:10.439000 audit[3843]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3843 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.439000 audit[3843]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffedbda580 a2=0 a3=1 items=0 ppid=3791 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.439000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 23 17:32:10.440000 audit[3844]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3844 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.440000 audit[3844]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff8507bd0 a2=0 a3=1 items=0 ppid=3791 pid=3844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.440000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 23 17:32:10.441000 audit[3846]: NETFILTER_CFG table=mangle:60 family=2 entries=1 op=nft_register_chain pid=3846 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.441000 audit[3846]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc37b7b40 a2=0 a3=1 items=0 ppid=3791 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.441000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 23 17:32:10.442000 audit[3848]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3848 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.442000 audit[3848]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe4ef3120 a2=0 a3=1 items=0 ppid=3791 pid=3848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.442000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 23 17:32:10.444000 audit[3849]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_chain pid=3849 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.444000 audit[3849]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff6b0ce70 a2=0 a3=1 items=0 ppid=3791 pid=3849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.444000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 23 17:32:10.456493 containerd[2084]: time="2026-01-23T17:32:10.456026585Z" level=info msg="connecting to shim 3a1e968e61107d36dcc59dca6c281cd74c187d3710e0b8f8a132732d0b22c820" address="unix:///run/containerd/s/064e077980e7577f37be3a0fa4b74373323fc918d21984cc6d45521858029c99" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:32:10.477036 systemd[1]: Started cri-containerd-3a1e968e61107d36dcc59dca6c281cd74c187d3710e0b8f8a132732d0b22c820.scope - libcontainer container 3a1e968e61107d36dcc59dca6c281cd74c187d3710e0b8f8a132732d0b22c820. Jan 23 17:32:10.484000 audit: BPF prog-id=165 op=LOAD Jan 23 17:32:10.485000 audit: BPF prog-id=166 op=LOAD Jan 23 17:32:10.485000 audit[3870]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3859 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361316539363865363131303764333664636335396463613663323831 Jan 23 17:32:10.485000 audit: BPF prog-id=166 op=UNLOAD Jan 23 17:32:10.485000 audit[3870]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3859 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361316539363865363131303764333664636335396463613663323831 Jan 23 17:32:10.485000 audit: BPF prog-id=167 op=LOAD Jan 23 17:32:10.485000 audit[3870]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3859 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361316539363865363131303764333664636335396463613663323831 Jan 23 17:32:10.485000 audit: BPF prog-id=168 op=LOAD Jan 23 17:32:10.485000 audit[3870]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 
items=0 ppid=3859 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361316539363865363131303764333664636335396463613663323831 Jan 23 17:32:10.485000 audit: BPF prog-id=168 op=UNLOAD Jan 23 17:32:10.485000 audit[3870]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3859 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361316539363865363131303764333664636335396463613663323831 Jan 23 17:32:10.486000 audit: BPF prog-id=167 op=UNLOAD Jan 23 17:32:10.486000 audit[3870]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3859 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361316539363865363131303764333664636335396463613663323831 Jan 23 17:32:10.486000 audit: BPF prog-id=169 op=LOAD Jan 23 17:32:10.486000 audit[3870]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3859 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361316539363865363131303764333664636335396463613663323831 Jan 23 17:32:10.515307 containerd[2084]: time="2026-01-23T17:32:10.515268250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-2ghsd,Uid:e228b3fe-6115-4d9c-ac94-4a2b51328303,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3a1e968e61107d36dcc59dca6c281cd74c187d3710e0b8f8a132732d0b22c820\"" Jan 23 17:32:10.517515 containerd[2084]: time="2026-01-23T17:32:10.517401220Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 17:32:10.550000 audit[3895]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3895 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.550000 audit[3895]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd958f770 a2=0 a3=1 items=0 ppid=3791 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 
17:32:10.550000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 23 17:32:10.553000 audit[3897]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3897 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.553000 audit[3897]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc9862800 a2=0 a3=1 items=0 ppid=3791 pid=3897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.553000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 23 17:32:10.556000 audit[3900]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=3900 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.556000 audit[3900]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe3a74740 a2=0 a3=1 items=0 ppid=3791 pid=3900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.556000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 23 17:32:10.557000 audit[3901]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=3901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.557000 audit[3901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2550760 a2=0 a3=1 items=0 ppid=3791 pid=3901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.557000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 23 17:32:10.559000 audit[3903]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.559000 audit[3903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdc379380 a2=0 a3=1 items=0 ppid=3791 pid=3903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.559000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 23 17:32:10.560000 audit[3904]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3904 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.560000 audit[3904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff0903880 a2=0 a3=1 items=0 ppid=3791 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.560000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 23 17:32:10.563000 audit[3906]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3906 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.563000 audit[3906]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcadb3010 a2=0 a3=1 items=0 ppid=3791 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.563000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 17:32:10.566000 audit[3909]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=3909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.566000 audit[3909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff7ccda50 a2=0 a3=1 items=0 ppid=3791 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.566000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 17:32:10.567000 audit[3910]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=3910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.567000 audit[3910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffde2846f0 a2=0 a3=1 items=0 ppid=3791 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.567000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 23 17:32:10.569000 audit[3912]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.569000 audit[3912]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd19e8830 a2=0 a3=1 items=0 ppid=3791 pid=3912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.569000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 23 17:32:10.570000 audit[3913]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=3913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.570000 audit[3913]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=104 a0=3 a1=ffffe4ffe4c0 a2=0 a3=1 items=0 ppid=3791 pid=3913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.570000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 23 17:32:10.572000 audit[3915]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=3915 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.572000 audit[3915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffec5b31b0 a2=0 a3=1 items=0 ppid=3791 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.572000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 23 17:32:10.575000 audit[3918]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=3918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.575000 audit[3918]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe1efb290 a2=0 a3=1 items=0 ppid=3791 pid=3918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.575000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 23 17:32:10.579000 audit[3921]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=3921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.579000 audit[3921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe1a65a60 a2=0 a3=1 items=0 ppid=3791 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.579000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 23 17:32:10.580000 audit[3922]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.580000 audit[3922]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffed3beab0 a2=0 a3=1 items=0 ppid=3791 pid=3922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.580000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 23 17:32:10.582000 audit[3924]: NETFILTER_CFG table=nat:78 family=2 
entries=1 op=nft_register_rule pid=3924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.582000 audit[3924]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffede0e850 a2=0 a3=1 items=0 ppid=3791 pid=3924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.582000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 17:32:10.586000 audit[3927]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=3927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.586000 audit[3927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff3c50240 a2=0 a3=1 items=0 ppid=3791 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.586000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 17:32:10.587000 audit[3928]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=3928 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.587000 audit[3928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc081e000 a2=0 a3=1 items=0 ppid=3791 pid=3928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.587000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 23 17:32:10.590000 audit[3930]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=3930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 23 17:32:10.590000 audit[3930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffd229c6d0 a2=0 a3=1 items=0 ppid=3791 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.590000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 23 17:32:10.706000 audit[3936]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=3936 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:10.706000 audit[3936]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcad67c60 a2=0 a3=1 items=0 ppid=3791 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.706000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:10.713000 audit[3936]: NETFILTER_CFG 
table=nat:83 family=2 entries=14 op=nft_register_chain pid=3936 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:10.713000 audit[3936]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffcad67c60 a2=0 a3=1 items=0 ppid=3791 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.713000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:10.715000 audit[3941]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.715000 audit[3941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff429ec70 a2=0 a3=1 items=0 ppid=3791 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.715000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 23 17:32:10.718000 audit[3943]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=3943 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.718000 audit[3943]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffdd2b6660 a2=0 a3=1 items=0 ppid=3791 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.718000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 23 17:32:10.721000 audit[3946]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3946 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.721000 audit[3946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe39ff3e0 a2=0 a3=1 items=0 ppid=3791 pid=3946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.721000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 23 17:32:10.722000 audit[3947]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3947 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.722000 audit[3947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe25f2bc0 a2=0 a3=1 items=0 ppid=3791 pid=3947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.722000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 23 17:32:10.724000 audit[3949]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3949 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.724000 audit[3949]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffff2b46f0 a2=0 a3=1 items=0 ppid=3791 pid=3949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.724000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 23 17:32:10.725000 audit[3950]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3950 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.725000 audit[3950]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdfe5aa70 a2=0 a3=1 items=0 ppid=3791 pid=3950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.725000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 23 17:32:10.727000 audit[3952]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3952 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.727000 audit[3952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd4c169c0 a2=0 a3=1 items=0 ppid=3791 pid=3952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.727000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 17:32:10.730000 audit[3955]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=3955 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.730000 audit[3955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffc8f1eae0 a2=0 a3=1 items=0 ppid=3791 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.730000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 17:32:10.733000 audit[3956]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=3956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.733000 audit[3956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd35cce70 a2=0 a3=1 items=0 ppid=3791 pid=3956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.733000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 23 17:32:10.735000 audit[3958]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.735000 audit[3958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff4aaa6a0 a2=0 a3=1 items=0 ppid=3791 pid=3958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.735000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 23 17:32:10.736000 audit[3959]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=3959 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.736000 audit[3959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc4db6c20 a2=0 a3=1 items=0 ppid=3791 pid=3959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.736000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 23 17:32:10.738000 audit[3961]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=3961 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.738000 audit[3961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe7ed4e00 a2=0 a3=1 items=0 ppid=3791 pid=3961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.738000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 23 17:32:10.741000 audit[3964]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=3964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.741000 audit[3964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe3c2a6f0 a2=0 a3=1 items=0 ppid=3791 pid=3964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.741000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 23 17:32:10.744000 audit[3967]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=3967 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.744000 audit[3967]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=748 a0=3 a1=ffffc6790f50 a2=0 a3=1 items=0 ppid=3791 pid=3967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.744000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 23 17:32:10.745000 audit[3968]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3968 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.745000 audit[3968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd523bb90 a2=0 a3=1 items=0 ppid=3791 pid=3968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.745000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 23 17:32:10.748000 audit[3970]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=3970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.748000 audit[3970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff8cd3a90 a2=0 a3=1 items=0 ppid=3791 pid=3970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.748000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 17:32:10.752000 audit[3973]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=3973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.752000 audit[3973]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe28a47b0 a2=0 a3=1 items=0 ppid=3791 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.752000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 23 17:32:10.753000 audit[3974]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=3974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.753000 audit[3974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd694e470 a2=0 a3=1 items=0 ppid=3791 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.753000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 23 17:32:10.755000 audit[3976]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=3976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 
17:32:10.755000 audit[3976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffee1ae520 a2=0 a3=1 items=0 ppid=3791 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.755000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 23 17:32:10.756000 audit[3977]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=3977 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.756000 audit[3977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff48b5c00 a2=0 a3=1 items=0 ppid=3791 pid=3977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.756000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 23 17:32:10.758000 audit[3979]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=3979 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.758000 audit[3979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffd8195c0 a2=0 a3=1 items=0 ppid=3791 pid=3979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.758000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 17:32:10.761000 audit[3982]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=3982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 23 17:32:10.761000 audit[3982]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe0c53cf0 a2=0 a3=1 items=0 ppid=3791 pid=3982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.761000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 23 17:32:10.764000 audit[3984]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=3984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 23 17:32:10.764000 audit[3984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=fffff8505630 a2=0 a3=1 items=0 ppid=3791 pid=3984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.764000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:10.765000 audit[3984]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=3984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 23 17:32:10.765000 audit[3984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=fffff8505630 a2=0 a3=1 items=0 ppid=3791 
pid=3984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:10.765000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:12.241684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1270599128.mount: Deactivated successfully. Jan 23 17:32:12.711435 kubelet[3685]: I0123 17:32:12.711369 3685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b75b4" podStartSLOduration=3.711353907 podStartE2EDuration="3.711353907s" podCreationTimestamp="2026-01-23 17:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:32:10.536130175 +0000 UTC m=+6.165676470" watchObservedRunningTime="2026-01-23 17:32:12.711353907 +0000 UTC m=+8.340900194" Jan 23 17:32:12.714920 containerd[2084]: time="2026-01-23T17:32:12.714869596Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:12.718365 containerd[2084]: time="2026-01-23T17:32:12.718307120Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 23 17:32:12.721081 containerd[2084]: time="2026-01-23T17:32:12.721048153Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:12.725533 containerd[2084]: time="2026-01-23T17:32:12.725019712Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:12.725533 containerd[2084]: time="2026-01-23T17:32:12.725400267Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.20794894s" Jan 23 17:32:12.725533 containerd[2084]: time="2026-01-23T17:32:12.725423580Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 23 17:32:12.732513 containerd[2084]: time="2026-01-23T17:32:12.732470485Z" level=info msg="CreateContainer within sandbox \"3a1e968e61107d36dcc59dca6c281cd74c187d3710e0b8f8a132732d0b22c820\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 17:32:12.747340 containerd[2084]: time="2026-01-23T17:32:12.746920530Z" level=info msg="Container 2846f8a89d9363092a2e85ccbd3701d13242755da74de9e3ba661d66bd0f2b44: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:32:12.749032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3300859285.mount: Deactivated successfully. 
Jan 23 17:32:12.766120 containerd[2084]: time="2026-01-23T17:32:12.766074385Z" level=info msg="CreateContainer within sandbox \"3a1e968e61107d36dcc59dca6c281cd74c187d3710e0b8f8a132732d0b22c820\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2846f8a89d9363092a2e85ccbd3701d13242755da74de9e3ba661d66bd0f2b44\"" Jan 23 17:32:12.767148 containerd[2084]: time="2026-01-23T17:32:12.767102037Z" level=info msg="StartContainer for \"2846f8a89d9363092a2e85ccbd3701d13242755da74de9e3ba661d66bd0f2b44\"" Jan 23 17:32:12.768019 containerd[2084]: time="2026-01-23T17:32:12.767989241Z" level=info msg="connecting to shim 2846f8a89d9363092a2e85ccbd3701d13242755da74de9e3ba661d66bd0f2b44" address="unix:///run/containerd/s/064e077980e7577f37be3a0fa4b74373323fc918d21984cc6d45521858029c99" protocol=ttrpc version=3 Jan 23 17:32:12.787133 systemd[1]: Started cri-containerd-2846f8a89d9363092a2e85ccbd3701d13242755da74de9e3ba661d66bd0f2b44.scope - libcontainer container 2846f8a89d9363092a2e85ccbd3701d13242755da74de9e3ba661d66bd0f2b44. Jan 23 17:32:12.797000 audit: BPF prog-id=170 op=LOAD Jan 23 17:32:12.797000 audit: BPF prog-id=171 op=LOAD Jan 23 17:32:12.797000 audit[3993]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000220180 a2=98 a3=0 items=0 ppid=3859 pid=3993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:12.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238343666386138396439333633303932613265383563636264333730 Jan 23 17:32:12.797000 audit: BPF prog-id=171 op=UNLOAD Jan 23 17:32:12.797000 audit[3993]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3859 pid=3993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:12.797000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238343666386138396439333633303932613265383563636264333730 Jan 23 17:32:12.798000 audit: BPF prog-id=172 op=LOAD Jan 23 17:32:12.798000 audit[3993]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002203e8 a2=98 a3=0 items=0 ppid=3859 pid=3993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:12.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238343666386138396439333633303932613265383563636264333730 Jan 23 17:32:12.798000 audit: BPF prog-id=173 op=LOAD Jan 23 17:32:12.798000 audit[3993]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000220168 a2=98 a3=0 items=0 ppid=3859 pid=3993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:12.798000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238343666386138396439333633303932613265383563636264333730 Jan 23 17:32:12.798000 audit: BPF prog-id=173 op=UNLOAD Jan 23 17:32:12.798000 audit[3993]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3859 pid=3993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:12.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238343666386138396439333633303932613265383563636264333730 Jan 23 17:32:12.798000 audit: BPF prog-id=172 op=UNLOAD Jan 23 17:32:12.798000 audit[3993]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3859 pid=3993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:12.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238343666386138396439333633303932613265383563636264333730 Jan 23 17:32:12.798000 audit: BPF prog-id=174 op=LOAD Jan 23 17:32:12.798000 audit[3993]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000220648 a2=98 a3=0 items=0 ppid=3859 pid=3993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:12.798000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238343666386138396439333633303932613265383563636264333730 Jan 23 17:32:12.820338 containerd[2084]: time="2026-01-23T17:32:12.820252132Z" level=info msg="StartContainer for \"2846f8a89d9363092a2e85ccbd3701d13242755da74de9e3ba661d66bd0f2b44\" returns successfully" Jan 23 17:32:17.003895 waagent[2315]: 2026-01-23T17:32:17.003741Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 23 17:32:17.012695 waagent[2315]: 2026-01-23T17:32:17.012646Z INFO ExtHandler Jan 23 17:32:17.012751 waagent[2315]: 2026-01-23T17:32:17.012732Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 23 17:32:17.017008 waagent[2315]: 2026-01-23T17:32:17.016975Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 17:32:17.074099 waagent[2315]: 2026-01-23T17:32:17.074020Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C7F04735D9384968A49E39E4BBE5448748E32C69', 'hasPrivateKey': True} Jan 23 17:32:17.074512 waagent[2315]: 2026-01-23T17:32:17.074476Z INFO ExtHandler Fetch goal state completed Jan 23 17:32:17.074814 waagent[2315]: 2026-01-23T17:32:17.074786Z INFO ExtHandler ExtHandler Jan 23 17:32:17.074894 waagent[2315]: 2026-01-23T17:32:17.074873Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 
channel: WireServer source: Fabric activity: e4a4ab8e-8cde-4a75-bf6e-fbbd96fe6123 correlation f48e16ea-b654-413b-bec9-89289907a1d6 created: 2026-01-23T17:32:09.979866Z] Jan 23 17:32:17.075131 waagent[2315]: 2026-01-23T17:32:17.075105Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 23 17:32:17.075559 waagent[2315]: 2026-01-23T17:32:17.075529Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 23 17:32:18.000090 sudo[2582]: pam_unix(sudo:session): session closed for user root Jan 23 17:32:17.999000 audit[2582]: USER_END pid=2582 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:32:18.003743 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 23 17:32:18.003837 kernel: audit: type=1106 audit(1769189537.999:551): pid=2582 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:32:17.999000 audit[2582]: CRED_DISP pid=2582 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:32:18.033989 kernel: audit: type=1104 audit(1769189537.999:552): pid=2582 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 23 17:32:18.079053 sshd[2581]: Connection closed by 10.200.16.10 port 47550 Jan 23 17:32:18.079826 sshd-session[2577]: pam_unix(sshd:session): session closed for user core Jan 23 17:32:18.080000 audit[2577]: USER_END pid=2577 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:32:18.083192 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 17:32:18.084952 systemd[1]: session-10.scope: Consumed 4.042s CPU time, 219.8M memory peak. Jan 23 17:32:18.086755 systemd[1]: sshd@6-10.200.20.22:22-10.200.16.10:47550.service: Deactivated successfully. Jan 23 17:32:18.102334 systemd-logind[2051]: Session 10 logged out. Waiting for processes to exit. Jan 23 17:32:18.104485 systemd-logind[2051]: Removed session 10. 
Jan 23 17:32:18.080000 audit[2577]: CRED_DISP pid=2577 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:32:18.118472 kernel: audit: type=1106 audit(1769189538.080:553): pid=2577 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:32:18.118548 kernel: audit: type=1104 audit(1769189538.080:554): pid=2577 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:32:18.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.22:22-10.200.16.10:47550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:18.133663 kernel: audit: type=1131 audit(1769189538.086:555): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.22:22-10.200.16.10:47550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:32:19.862000 audit[4074]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:19.879954 kernel: audit: type=1325 audit(1769189539.862:556): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:19.862000 audit[4074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc8be19d0 a2=0 a3=1 items=0 ppid=3791 pid=4074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:19.862000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:19.959952 kernel: audit: type=1300 audit(1769189539.862:556): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc8be19d0 a2=0 a3=1 items=0 ppid=3791 pid=4074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:19.960091 kernel: audit: type=1327 audit(1769189539.862:556): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:19.927000 audit[4074]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:19.970531 kernel: audit: type=1325 audit(1769189539.927:557): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4074 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:19.927000 audit[4074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc8be19d0 a2=0 a3=1 items=0 ppid=3791 pid=4074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:19.989860 kernel: audit: type=1300 audit(1769189539.927:557): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc8be19d0 a2=0 a3=1 items=0 ppid=3791 pid=4074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:19.927000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:20.997000 audit[4076]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4076 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:20.997000 audit[4076]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff17a3fd0 a2=0 a3=1 items=0 ppid=3791 pid=4076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:20.997000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:20.999000 audit[4076]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4076 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:20.999000 audit[4076]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff17a3fd0 a2=0 a3=1 items=0 ppid=3791 pid=4076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:20.999000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:22.895000 audit[4078]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4078 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:22.895000 audit[4078]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffbe521d0 a2=0 a3=1 items=0 ppid=3791 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:22.895000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:22.904000 audit[4078]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4078 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:22.904000 audit[4078]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffbe521d0 a2=0 a3=1 items=0 ppid=3791 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:22.904000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:23.102486 waagent[2315]: 2026-01-23T17:32:23.102431Z INFO ExtHandler Jan 23 17:32:23.102816 waagent[2315]: 2026-01-23T17:32:23.102558Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ac533e2e-cb70-477a-a90e-85296adc97f7 eTag: 
14457938334497230284 source: Fabric] Jan 23 17:32:23.102924 waagent[2315]: 2026-01-23T17:32:23.102892Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 23 17:32:23.915000 audit[4080]: NETFILTER_CFG table=filter:114 family=2 entries=19 op=nft_register_rule pid=4080 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:23.919917 kernel: kauditd_printk_skb: 13 callbacks suppressed Jan 23 17:32:23.920010 kernel: audit: type=1325 audit(1769189543.915:562): table=filter:114 family=2 entries=19 op=nft_register_rule pid=4080 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:23.950928 kernel: audit: type=1300 audit(1769189543.915:562): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcc9ad640 a2=0 a3=1 items=0 ppid=3791 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:23.915000 audit[4080]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcc9ad640 a2=0 a3=1 items=0 ppid=3791 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:23.915000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:23.961668 kernel: audit: type=1327 audit(1769189543.915:562): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:23.951000 audit[4080]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4080 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:23.976489 kernel: audit: type=1325 audit(1769189543.951:563): table=nat:115 family=2 entries=12 op=nft_register_rule pid=4080 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:23.951000 audit[4080]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcc9ad640 a2=0 a3=1 items=0 ppid=3791 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:23.999785 kernel: audit: type=1300 audit(1769189543.951:563): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcc9ad640 a2=0 a3=1 items=0 ppid=3791 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:23.951000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:24.010527 kernel: audit: type=1327 audit(1769189543.951:563): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:24.702320 kubelet[3685]: I0123 17:32:24.702264 3685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-2ghsd" podStartSLOduration=12.492849114 podStartE2EDuration="14.702250391s" podCreationTimestamp="2026-01-23 17:32:10 +0000 UTC" firstStartedPulling="2026-01-23 17:32:10.517040602 +0000 UTC m=+6.146586889" lastFinishedPulling="2026-01-23 17:32:12.726441879 +0000 UTC m=+8.355988166" 
observedRunningTime="2026-01-23 17:32:13.538604336 +0000 UTC m=+9.168150631" watchObservedRunningTime="2026-01-23 17:32:24.702250391 +0000 UTC m=+20.331796678" Jan 23 17:32:24.715485 systemd[1]: Created slice kubepods-besteffort-podd21b3d93_6090_497f_ae8c_6e41a95804eb.slice - libcontainer container kubepods-besteffort-podd21b3d93_6090_497f_ae8c_6e41a95804eb.slice. Jan 23 17:32:24.882424 kubelet[3685]: I0123 17:32:24.882376 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br2mh\" (UniqueName: \"kubernetes.io/projected/d21b3d93-6090-497f-ae8c-6e41a95804eb-kube-api-access-br2mh\") pod \"calico-typha-6cfccf8478-x8xpq\" (UID: \"d21b3d93-6090-497f-ae8c-6e41a95804eb\") " pod="calico-system/calico-typha-6cfccf8478-x8xpq" Jan 23 17:32:24.884104 kubelet[3685]: I0123 17:32:24.882600 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d21b3d93-6090-497f-ae8c-6e41a95804eb-tigera-ca-bundle\") pod \"calico-typha-6cfccf8478-x8xpq\" (UID: \"d21b3d93-6090-497f-ae8c-6e41a95804eb\") " pod="calico-system/calico-typha-6cfccf8478-x8xpq" Jan 23 17:32:24.884104 kubelet[3685]: I0123 17:32:24.882622 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d21b3d93-6090-497f-ae8c-6e41a95804eb-typha-certs\") pod \"calico-typha-6cfccf8478-x8xpq\" (UID: \"d21b3d93-6090-497f-ae8c-6e41a95804eb\") " pod="calico-system/calico-typha-6cfccf8478-x8xpq" Jan 23 17:32:24.894113 systemd[1]: Created slice kubepods-besteffort-pod8370f207_6c42_4dbb_984e_760918e95ece.slice - libcontainer container kubepods-besteffort-pod8370f207_6c42_4dbb_984e_760918e95ece.slice. 
Jan 23 17:32:24.984000 audit[4083]: NETFILTER_CFG table=filter:116 family=2 entries=21 op=nft_register_rule pid=4083 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:24.984000 audit[4083]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe03e0100 a2=0 a3=1 items=0 ppid=3791 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:25.013972 kernel: audit: type=1325 audit(1769189544.984:564): table=filter:116 family=2 entries=21 op=nft_register_rule pid=4083 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:25.014097 kernel: audit: type=1300 audit(1769189544.984:564): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe03e0100 a2=0 a3=1 items=0 ppid=3791 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:24.984000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:25.023154 kernel: audit: type=1327 audit(1769189544.984:564): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:25.015000 audit[4083]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4083 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:25.033801 kernel: audit: type=1325 audit(1769189545.015:565): table=nat:117 family=2 entries=12 op=nft_register_rule pid=4083 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:25.015000 audit[4083]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe03e0100 a2=0 a3=1 items=0 ppid=3791 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:25.015000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:25.037426 containerd[2084]: time="2026-01-23T17:32:25.037388737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cfccf8478-x8xpq,Uid:d21b3d93-6090-497f-ae8c-6e41a95804eb,Namespace:calico-system,Attempt:0,}" Jan 23 17:32:25.084248 kubelet[3685]: I0123 17:32:25.084209 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8370f207-6c42-4dbb-984e-760918e95ece-flexvol-driver-host\") pod \"calico-node-x8jq4\" (UID: \"8370f207-6c42-4dbb-984e-760918e95ece\") " pod="calico-system/calico-node-x8jq4" Jan 23 17:32:25.084248 kubelet[3685]: I0123 17:32:25.084244 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8370f207-6c42-4dbb-984e-760918e95ece-policysync\") pod \"calico-node-x8jq4\" (UID: \"8370f207-6c42-4dbb-984e-760918e95ece\") " pod="calico-system/calico-node-x8jq4" Jan 23 17:32:25.084248 kubelet[3685]: I0123 17:32:25.084259 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/8370f207-6c42-4dbb-984e-760918e95ece-var-run-calico\") pod \"calico-node-x8jq4\" (UID: \"8370f207-6c42-4dbb-984e-760918e95ece\") " pod="calico-system/calico-node-x8jq4" Jan 23 17:32:25.084989 kubelet[3685]: I0123 17:32:25.084295 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8370f207-6c42-4dbb-984e-760918e95ece-cni-log-dir\") pod \"calico-node-x8jq4\" (UID: \"8370f207-6c42-4dbb-984e-760918e95ece\") " pod="calico-system/calico-node-x8jq4" Jan 23 17:32:25.084989 kubelet[3685]: I0123 17:32:25.084307 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8370f207-6c42-4dbb-984e-760918e95ece-var-lib-calico\") pod \"calico-node-x8jq4\" (UID: \"8370f207-6c42-4dbb-984e-760918e95ece\") " pod="calico-system/calico-node-x8jq4" Jan 23 17:32:25.084989 kubelet[3685]: I0123 17:32:25.084318 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8370f207-6c42-4dbb-984e-760918e95ece-lib-modules\") pod \"calico-node-x8jq4\" (UID: \"8370f207-6c42-4dbb-984e-760918e95ece\") " pod="calico-system/calico-node-x8jq4" Jan 23 17:32:25.084989 kubelet[3685]: I0123 17:32:25.084328 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8370f207-6c42-4dbb-984e-760918e95ece-tigera-ca-bundle\") pod \"calico-node-x8jq4\" (UID: \"8370f207-6c42-4dbb-984e-760918e95ece\") " pod="calico-system/calico-node-x8jq4" Jan 23 17:32:25.084989 kubelet[3685]: I0123 17:32:25.084340 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8370f207-6c42-4dbb-984e-760918e95ece-xtables-lock\") pod \"calico-node-x8jq4\" (UID: \"8370f207-6c42-4dbb-984e-760918e95ece\") " pod="calico-system/calico-node-x8jq4" Jan 23 17:32:25.085081 kubelet[3685]: I0123 17:32:25.084378 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h485r\" (UniqueName: \"kubernetes.io/projected/8370f207-6c42-4dbb-984e-760918e95ece-kube-api-access-h485r\") pod \"calico-node-x8jq4\" (UID: \"8370f207-6c42-4dbb-984e-760918e95ece\") " pod="calico-system/calico-node-x8jq4" Jan 23 17:32:25.085081 kubelet[3685]: I0123 17:32:25.084441 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8370f207-6c42-4dbb-984e-760918e95ece-cni-net-dir\") pod \"calico-node-x8jq4\" (UID: \"8370f207-6c42-4dbb-984e-760918e95ece\") " pod="calico-system/calico-node-x8jq4" Jan 23 17:32:25.085081 kubelet[3685]: I0123 17:32:25.084464 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8370f207-6c42-4dbb-984e-760918e95ece-node-certs\") pod \"calico-node-x8jq4\" (UID: \"8370f207-6c42-4dbb-984e-760918e95ece\") " pod="calico-system/calico-node-x8jq4" Jan 23 17:32:25.085789 kubelet[3685]: I0123 17:32:25.085455 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8370f207-6c42-4dbb-984e-760918e95ece-cni-bin-dir\") pod 
\"calico-node-x8jq4\" (UID: \"8370f207-6c42-4dbb-984e-760918e95ece\") " pod="calico-system/calico-node-x8jq4" Jan 23 17:32:25.087202 containerd[2084]: time="2026-01-23T17:32:25.086273729Z" level=info msg="connecting to shim 6fa86857858a0efaa9fed29541b2009890e01cc80aa6a759610e5195fe325878" address="unix:///run/containerd/s/c0dfc556ceb20703b804cbb459059f52547e466f10dbbf3a921b8c57e12fb3cc" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:32:25.110609 kubelet[3685]: E0123 17:32:25.109638 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8t56" podUID="bea6d6d6-6443-4534-ac1b-26cecad019a7" Jan 23 17:32:25.115089 systemd[1]: Started cri-containerd-6fa86857858a0efaa9fed29541b2009890e01cc80aa6a759610e5195fe325878.scope - libcontainer container 6fa86857858a0efaa9fed29541b2009890e01cc80aa6a759610e5195fe325878. Jan 23 17:32:25.134000 audit: BPF prog-id=175 op=LOAD Jan 23 17:32:25.135000 audit: BPF prog-id=176 op=LOAD Jan 23 17:32:25.135000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4095 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:25.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666613836383537383538613065666161396665643239353431623230 Jan 23 17:32:25.135000 audit: BPF prog-id=176 op=UNLOAD Jan 23 17:32:25.135000 audit[4105]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4095 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:25.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666613836383537383538613065666161396665643239353431623230 Jan 23 17:32:25.135000 audit: BPF prog-id=177 op=LOAD Jan 23 17:32:25.135000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4095 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:25.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666613836383537383538613065666161396665643239353431623230 Jan 23 17:32:25.135000 audit: BPF prog-id=178 op=LOAD Jan 23 17:32:25.135000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4095 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:25.135000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666613836383537383538613065666161396665643239353431623230 Jan 23 17:32:25.135000 audit: BPF prog-id=178 op=UNLOAD Jan 23 17:32:25.135000 audit[4105]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4095 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:25.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666613836383537383538613065666161396665643239353431623230 Jan 23 17:32:25.135000 audit: BPF prog-id=177 op=UNLOAD Jan 23 17:32:25.135000 audit[4105]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4095 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:25.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666613836383537383538613065666161396665643239353431623230 Jan 23 17:32:25.136000 audit: BPF prog-id=179 op=LOAD Jan 23 17:32:25.136000 audit[4105]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4095 pid=4105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:25.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666613836383537383538613065666161396665643239353431623230 Jan 23 17:32:25.168604 containerd[2084]: time="2026-01-23T17:32:25.168387976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cfccf8478-x8xpq,Uid:d21b3d93-6090-497f-ae8c-6e41a95804eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"6fa86857858a0efaa9fed29541b2009890e01cc80aa6a759610e5195fe325878\"" Jan 23 17:32:25.171776 containerd[2084]: time="2026-01-23T17:32:25.171660232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 17:32:25.186149 kubelet[3685]: I0123 17:32:25.185704 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bea6d6d6-6443-4534-ac1b-26cecad019a7-socket-dir\") pod \"csi-node-driver-v8t56\" (UID: \"bea6d6d6-6443-4534-ac1b-26cecad019a7\") " pod="calico-system/csi-node-driver-v8t56" Jan 23 17:32:25.186475 kubelet[3685]: I0123 17:32:25.186367 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bea6d6d6-6443-4534-ac1b-26cecad019a7-kubelet-dir\") pod \"csi-node-driver-v8t56\" (UID: \"bea6d6d6-6443-4534-ac1b-26cecad019a7\") " 
pod="calico-system/csi-node-driver-v8t56" Jan 23 17:32:25.186875 kubelet[3685]: I0123 17:32:25.186537 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq5gl\" (UniqueName: \"kubernetes.io/projected/bea6d6d6-6443-4534-ac1b-26cecad019a7-kube-api-access-tq5gl\") pod \"csi-node-driver-v8t56\" (UID: \"bea6d6d6-6443-4534-ac1b-26cecad019a7\") " pod="calico-system/csi-node-driver-v8t56" Jan 23 17:32:25.186875 kubelet[3685]: I0123 17:32:25.186574 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bea6d6d6-6443-4534-ac1b-26cecad019a7-registration-dir\") pod \"csi-node-driver-v8t56\" (UID: \"bea6d6d6-6443-4534-ac1b-26cecad019a7\") " pod="calico-system/csi-node-driver-v8t56" Jan 23 17:32:25.186875 kubelet[3685]: I0123 17:32:25.186618 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bea6d6d6-6443-4534-ac1b-26cecad019a7-varrun\") pod \"csi-node-driver-v8t56\" (UID: \"bea6d6d6-6443-4534-ac1b-26cecad019a7\") " pod="calico-system/csi-node-driver-v8t56" Jan 23 17:32:25.190487 kubelet[3685]: E0123 17:32:25.190465 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.190609 kubelet[3685]: W0123 17:32:25.190595 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.190674 kubelet[3685]: E0123 17:32:25.190654 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.190931 kubelet[3685]: E0123 17:32:25.190881 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.190931 kubelet[3685]: W0123 17:32:25.190891 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.190931 kubelet[3685]: E0123 17:32:25.190901 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.191181 kubelet[3685]: E0123 17:32:25.191154 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.191181 kubelet[3685]: W0123 17:32:25.191166 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.191326 kubelet[3685]: E0123 17:32:25.191254 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:32:25.191505 kubelet[3685]: E0123 17:32:25.191494 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.191579 kubelet[3685]: W0123 17:32:25.191569 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.191637 kubelet[3685]: E0123 17:32:25.191621 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.197329 kubelet[3685]: E0123 17:32:25.197260 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.197329 kubelet[3685]: W0123 17:32:25.197278 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.197329 kubelet[3685]: E0123 17:32:25.197294 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.209436 kubelet[3685]: E0123 17:32:25.209356 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.209436 kubelet[3685]: W0123 17:32:25.209377 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.209436 kubelet[3685]: E0123 17:32:25.209399 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.288324 kubelet[3685]: E0123 17:32:25.288120 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.288324 kubelet[3685]: W0123 17:32:25.288147 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.288324 kubelet[3685]: E0123 17:32:25.288167 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.288967 kubelet[3685]: E0123 17:32:25.288941 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.288967 kubelet[3685]: W0123 17:32:25.288962 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.289118 kubelet[3685]: E0123 17:32:25.288976 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:32:25.289168 kubelet[3685]: E0123 17:32:25.289150 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.289168 kubelet[3685]: W0123 17:32:25.289158 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.289168 kubelet[3685]: E0123 17:32:25.289165 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.289605 kubelet[3685]: E0123 17:32:25.289587 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.289605 kubelet[3685]: W0123 17:32:25.289602 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.289605 kubelet[3685]: E0123 17:32:25.289612 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.289790 kubelet[3685]: E0123 17:32:25.289772 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.289790 kubelet[3685]: W0123 17:32:25.289783 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.289790 kubelet[3685]: E0123 17:32:25.289790 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.290032 kubelet[3685]: E0123 17:32:25.290014 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.290032 kubelet[3685]: W0123 17:32:25.290028 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.290077 kubelet[3685]: E0123 17:32:25.290036 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.290279 kubelet[3685]: E0123 17:32:25.290145 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.290279 kubelet[3685]: W0123 17:32:25.290155 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.290279 kubelet[3685]: E0123 17:32:25.290163 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:32:25.290419 kubelet[3685]: E0123 17:32:25.290402 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.290565 kubelet[3685]: W0123 17:32:25.290461 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.290565 kubelet[3685]: E0123 17:32:25.290478 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.290687 kubelet[3685]: E0123 17:32:25.290676 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.290752 kubelet[3685]: W0123 17:32:25.290729 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.290916 kubelet[3685]: E0123 17:32:25.290796 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.291029 kubelet[3685]: E0123 17:32:25.291017 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.291189 kubelet[3685]: W0123 17:32:25.291073 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.291189 kubelet[3685]: E0123 17:32:25.291088 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.291304 kubelet[3685]: E0123 17:32:25.291293 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.291353 kubelet[3685]: W0123 17:32:25.291344 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.291402 kubelet[3685]: E0123 17:32:25.291391 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.291674 kubelet[3685]: E0123 17:32:25.291580 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.291674 kubelet[3685]: W0123 17:32:25.291591 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.291674 kubelet[3685]: E0123 17:32:25.291599 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:32:25.291813 kubelet[3685]: E0123 17:32:25.291801 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.291888 kubelet[3685]: W0123 17:32:25.291879 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.291951 kubelet[3685]: E0123 17:32:25.291937 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.292246 kubelet[3685]: E0123 17:32:25.292135 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.292246 kubelet[3685]: W0123 17:32:25.292146 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.292246 kubelet[3685]: E0123 17:32:25.292154 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.292382 kubelet[3685]: E0123 17:32:25.292371 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.292432 kubelet[3685]: W0123 17:32:25.292422 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.292481 kubelet[3685]: E0123 17:32:25.292469 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.292674 kubelet[3685]: E0123 17:32:25.292664 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.292807 kubelet[3685]: W0123 17:32:25.292727 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.292807 kubelet[3685]: E0123 17:32:25.292741 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.293911 kubelet[3685]: E0123 17:32:25.293317 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.293911 kubelet[3685]: W0123 17:32:25.293332 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.293911 kubelet[3685]: E0123 17:32:25.293343 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:32:25.294474 kubelet[3685]: E0123 17:32:25.294452 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.294474 kubelet[3685]: W0123 17:32:25.294470 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.294767 kubelet[3685]: E0123 17:32:25.294485 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.294767 kubelet[3685]: E0123 17:32:25.294655 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.294767 kubelet[3685]: W0123 17:32:25.294661 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.294767 kubelet[3685]: E0123 17:32:25.294669 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.294767 kubelet[3685]: E0123 17:32:25.294778 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.294767 kubelet[3685]: W0123 17:32:25.294784 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.295393 kubelet[3685]: E0123 17:32:25.294790 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.295393 kubelet[3685]: E0123 17:32:25.294918 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.295393 kubelet[3685]: W0123 17:32:25.294924 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.295393 kubelet[3685]: E0123 17:32:25.294931 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.295393 kubelet[3685]: E0123 17:32:25.295024 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.295393 kubelet[3685]: W0123 17:32:25.295029 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.295393 kubelet[3685]: E0123 17:32:25.295035 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:32:25.295393 kubelet[3685]: E0123 17:32:25.295108 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.295393 kubelet[3685]: W0123 17:32:25.295112 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.295393 kubelet[3685]: E0123 17:32:25.295116 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.295675 kubelet[3685]: E0123 17:32:25.295213 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.295675 kubelet[3685]: W0123 17:32:25.295218 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.295675 kubelet[3685]: E0123 17:32:25.295223 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.295675 kubelet[3685]: E0123 17:32:25.295333 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.295675 kubelet[3685]: W0123 17:32:25.295337 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.295675 kubelet[3685]: E0123 17:32:25.295343 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.302695 kubelet[3685]: E0123 17:32:25.302652 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:25.302695 kubelet[3685]: W0123 17:32:25.302675 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:25.302695 kubelet[3685]: E0123 17:32:25.302694 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:25.528891 containerd[2084]: time="2026-01-23T17:32:25.528803241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x8jq4,Uid:8370f207-6c42-4dbb-984e-760918e95ece,Namespace:calico-system,Attempt:0,}" Jan 23 17:32:25.641244 containerd[2084]: time="2026-01-23T17:32:25.641175109Z" level=info msg="connecting to shim ce1a28a92da0c2f7be55f07fc7d9c164d5e6d55e126b2d5ddae3c7304dde1379" address="unix:///run/containerd/s/9392e13e552d87f195047f2e31f75d6bb6cbd248dc83dc2a407ea3be7c945cb7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:32:25.663072 systemd[1]: Started cri-containerd-ce1a28a92da0c2f7be55f07fc7d9c164d5e6d55e126b2d5ddae3c7304dde1379.scope - libcontainer container ce1a28a92da0c2f7be55f07fc7d9c164d5e6d55e126b2d5ddae3c7304dde1379. 
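
On the repeated FlexVolume driver-call failures above: the kubelet runs the driver binary (here /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds) with the argument "init" and unmarshals its stdout as JSON; because the executable is not found, stdout is empty and the unmarshal fails with "unexpected end of JSON input". A rough, illustrative stub of the kind of stdout such a call expects, not the real Calico uds binary and only an assumption about its exact response fields:

    import json, sys

    # Illustrative FlexVolume-style driver stub: kubelet invokes "<driver> init" and
    # parses whatever the driver prints on stdout as JSON. An empty stdout is what
    # produces the "unexpected end of JSON input" errors logged above.
    def main() -> int:
        if len(sys.argv) > 1 and sys.argv[1] == "init":
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        print(json.dumps({"status": "Not supported"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())
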
Jan 23 17:32:25.670000 audit: BPF prog-id=180 op=LOAD Jan 23 17:32:25.671000 audit: BPF prog-id=181 op=LOAD Jan 23 17:32:25.671000 audit[4188]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=4176 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:25.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365316132386139326461306332663762653535663037666337643963 Jan 23 17:32:25.671000 audit: BPF prog-id=181 op=UNLOAD Jan 23 17:32:25.671000 audit[4188]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4176 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:25.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365316132386139326461306332663762653535663037666337643963 Jan 23 17:32:25.671000 audit: BPF prog-id=182 op=LOAD Jan 23 17:32:25.671000 audit[4188]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4176 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:25.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365316132386139326461306332663762653535663037666337643963 Jan 23 17:32:25.671000 audit: BPF prog-id=183 op=LOAD Jan 23 17:32:25.671000 audit[4188]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4176 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:25.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365316132386139326461306332663762653535663037666337643963 Jan 23 17:32:25.671000 audit: BPF prog-id=183 op=UNLOAD Jan 23 17:32:25.671000 audit[4188]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4176 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:25.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365316132386139326461306332663762653535663037666337643963 Jan 23 17:32:25.671000 audit: BPF prog-id=182 op=UNLOAD Jan 23 17:32:25.671000 audit[4188]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4176 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:25.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365316132386139326461306332663762653535663037666337643963 Jan 23 17:32:25.671000 audit: BPF prog-id=184 op=LOAD Jan 23 17:32:25.671000 audit[4188]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4176 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:25.671000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365316132386139326461306332663762653535663037666337643963 Jan 23 17:32:25.689503 containerd[2084]: time="2026-01-23T17:32:25.689457324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x8jq4,Uid:8370f207-6c42-4dbb-984e-760918e95ece,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce1a28a92da0c2f7be55f07fc7d9c164d5e6d55e126b2d5ddae3c7304dde1379\"" Jan 23 17:32:26.456392 kubelet[3685]: E0123 17:32:26.456286 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8t56" podUID="bea6d6d6-6443-4534-ac1b-26cecad019a7" Jan 23 17:32:26.611480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1156956980.mount: Deactivated successfully. 
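
The audit PROCTITLE records scattered through this section carry the process argv hex-encoded with NUL separators; the iptables-restore and runc payloads above decode accordingly. A small decode sketch (decode_proctitle is a name chosen here for illustration, and the sample value is the iptables-restore payload copied from the records above):

    def decode_proctitle(hex_str: str) -> str:
        """Audit PROCTITLE payloads are the argv bytes, hex-encoded and NUL-separated."""
        raw = bytes.fromhex(hex_str)
        return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

    sample = ("69707461626C65732D726573746F7265002D770035"
              "002D2D6E6F666C757368002D2D636F756E74657273")
    print(decode_proctitle(sample))  # iptables-restore -w 5 --noflush --counters
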
Jan 23 17:32:27.544463 containerd[2084]: time="2026-01-23T17:32:27.543953425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:27.546330 containerd[2084]: time="2026-01-23T17:32:27.546287138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Jan 23 17:32:27.549019 containerd[2084]: time="2026-01-23T17:32:27.548994634Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:27.552488 containerd[2084]: time="2026-01-23T17:32:27.552447169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:27.552865 containerd[2084]: time="2026-01-23T17:32:27.552815432Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.381074268s" Jan 23 17:32:27.552865 containerd[2084]: time="2026-01-23T17:32:27.552865538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 23 17:32:27.555524 containerd[2084]: time="2026-01-23T17:32:27.555478374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 17:32:27.570005 containerd[2084]: time="2026-01-23T17:32:27.569953301Z" level=info msg="CreateContainer within sandbox \"6fa86857858a0efaa9fed29541b2009890e01cc80aa6a759610e5195fe325878\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 17:32:27.589201 containerd[2084]: time="2026-01-23T17:32:27.589152448Z" level=info msg="Container 99eb46e55a1260d7e90b2e3cad2a15e7f6f3b55c0f23d1d90ea40cb5f18e1c4c: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:32:27.604316 containerd[2084]: time="2026-01-23T17:32:27.604185095Z" level=info msg="CreateContainer within sandbox \"6fa86857858a0efaa9fed29541b2009890e01cc80aa6a759610e5195fe325878\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"99eb46e55a1260d7e90b2e3cad2a15e7f6f3b55c0f23d1d90ea40cb5f18e1c4c\"" Jan 23 17:32:27.605824 containerd[2084]: time="2026-01-23T17:32:27.604737093Z" level=info msg="StartContainer for \"99eb46e55a1260d7e90b2e3cad2a15e7f6f3b55c0f23d1d90ea40cb5f18e1c4c\"" Jan 23 17:32:27.606174 containerd[2084]: time="2026-01-23T17:32:27.606155136Z" level=info msg="connecting to shim 99eb46e55a1260d7e90b2e3cad2a15e7f6f3b55c0f23d1d90ea40cb5f18e1c4c" address="unix:///run/containerd/s/c0dfc556ceb20703b804cbb459059f52547e466f10dbbf3a921b8c57e12fb3cc" protocol=ttrpc version=3 Jan 23 17:32:27.625189 systemd[1]: Started cri-containerd-99eb46e55a1260d7e90b2e3cad2a15e7f6f3b55c0f23d1d90ea40cb5f18e1c4c.scope - libcontainer container 99eb46e55a1260d7e90b2e3cad2a15e7f6f3b55c0f23d1d90ea40cb5f18e1c4c. 
Jan 23 17:32:27.638000 audit: BPF prog-id=185 op=LOAD Jan 23 17:32:27.638000 audit: BPF prog-id=186 op=LOAD Jan 23 17:32:27.638000 audit[4222]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4095 pid=4222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:27.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939656234366535356131323630643765393062326533636164326131 Jan 23 17:32:27.638000 audit: BPF prog-id=186 op=UNLOAD Jan 23 17:32:27.638000 audit[4222]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4095 pid=4222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:27.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939656234366535356131323630643765393062326533636164326131 Jan 23 17:32:27.639000 audit: BPF prog-id=187 op=LOAD Jan 23 17:32:27.639000 audit[4222]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4095 pid=4222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:27.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939656234366535356131323630643765393062326533636164326131 Jan 23 17:32:27.639000 audit: BPF prog-id=188 op=LOAD Jan 23 17:32:27.639000 audit[4222]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4095 pid=4222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:27.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939656234366535356131323630643765393062326533636164326131 Jan 23 17:32:27.639000 audit: BPF prog-id=188 op=UNLOAD Jan 23 17:32:27.639000 audit[4222]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4095 pid=4222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:27.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939656234366535356131323630643765393062326533636164326131 Jan 23 17:32:27.639000 audit: BPF prog-id=187 op=UNLOAD Jan 23 17:32:27.639000 audit[4222]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4095 pid=4222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:27.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939656234366535356131323630643765393062326533636164326131 Jan 23 17:32:27.639000 audit: BPF prog-id=189 op=LOAD Jan 23 17:32:27.639000 audit[4222]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4095 pid=4222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:27.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939656234366535356131323630643765393062326533636164326131 Jan 23 17:32:27.665935 containerd[2084]: time="2026-01-23T17:32:27.665893265Z" level=info msg="StartContainer for \"99eb46e55a1260d7e90b2e3cad2a15e7f6f3b55c0f23d1d90ea40cb5f18e1c4c\" returns successfully" Jan 23 17:32:28.456893 kubelet[3685]: E0123 17:32:28.456554 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8t56" podUID="bea6d6d6-6443-4534-ac1b-26cecad019a7" Jan 23 17:32:28.607756 kubelet[3685]: E0123 17:32:28.607710 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.607756 kubelet[3685]: W0123 17:32:28.607736 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.607756 kubelet[3685]: E0123 17:32:28.607757 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.608026 kubelet[3685]: E0123 17:32:28.607897 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.608026 kubelet[3685]: W0123 17:32:28.607903 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.608026 kubelet[3685]: E0123 17:32:28.607933 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:32:28.608152 kubelet[3685]: E0123 17:32:28.608037 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.608152 kubelet[3685]: W0123 17:32:28.608043 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.608152 kubelet[3685]: E0123 17:32:28.608049 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.608152 kubelet[3685]: E0123 17:32:28.608145 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.608152 kubelet[3685]: W0123 17:32:28.608150 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.608152 kubelet[3685]: E0123 17:32:28.608156 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.608350 kubelet[3685]: E0123 17:32:28.608247 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.608350 kubelet[3685]: W0123 17:32:28.608251 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.608350 kubelet[3685]: E0123 17:32:28.608256 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.608350 kubelet[3685]: E0123 17:32:28.608329 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.608350 kubelet[3685]: W0123 17:32:28.608333 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.608350 kubelet[3685]: E0123 17:32:28.608337 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.608533 kubelet[3685]: E0123 17:32:28.608415 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.608533 kubelet[3685]: W0123 17:32:28.608420 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.608533 kubelet[3685]: E0123 17:32:28.608425 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:32:28.608533 kubelet[3685]: E0123 17:32:28.608501 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.608533 kubelet[3685]: W0123 17:32:28.608505 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.608533 kubelet[3685]: E0123 17:32:28.608510 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.608696 kubelet[3685]: E0123 17:32:28.608599 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.608696 kubelet[3685]: W0123 17:32:28.608604 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.608696 kubelet[3685]: E0123 17:32:28.608610 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.608784 kubelet[3685]: E0123 17:32:28.608766 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.608784 kubelet[3685]: W0123 17:32:28.608774 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.608784 kubelet[3685]: E0123 17:32:28.608781 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.609068 kubelet[3685]: E0123 17:32:28.608888 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.609068 kubelet[3685]: W0123 17:32:28.608896 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.609068 kubelet[3685]: E0123 17:32:28.608902 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.609068 kubelet[3685]: E0123 17:32:28.608982 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.609068 kubelet[3685]: W0123 17:32:28.608986 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.609068 kubelet[3685]: E0123 17:32:28.608991 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:32:28.609194 kubelet[3685]: E0123 17:32:28.609077 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.609194 kubelet[3685]: W0123 17:32:28.609081 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.609194 kubelet[3685]: E0123 17:32:28.609087 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.609194 kubelet[3685]: E0123 17:32:28.609157 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.609194 kubelet[3685]: W0123 17:32:28.609160 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.609194 kubelet[3685]: E0123 17:32:28.609164 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.609281 kubelet[3685]: E0123 17:32:28.609230 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.609281 kubelet[3685]: W0123 17:32:28.609234 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.609281 kubelet[3685]: E0123 17:32:28.609239 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.618297 kubelet[3685]: E0123 17:32:28.618219 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.618297 kubelet[3685]: W0123 17:32:28.618256 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.618297 kubelet[3685]: E0123 17:32:28.618273 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.618772 kubelet[3685]: E0123 17:32:28.618736 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.618772 kubelet[3685]: W0123 17:32:28.618749 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.618772 kubelet[3685]: E0123 17:32:28.618760 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:32:28.620558 kubelet[3685]: E0123 17:32:28.619132 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.620558 kubelet[3685]: W0123 17:32:28.619143 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.620558 kubelet[3685]: E0123 17:32:28.619153 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.620558 kubelet[3685]: E0123 17:32:28.619372 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.620558 kubelet[3685]: W0123 17:32:28.619381 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.620558 kubelet[3685]: E0123 17:32:28.619390 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.620558 kubelet[3685]: E0123 17:32:28.619563 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.620558 kubelet[3685]: W0123 17:32:28.619571 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.620558 kubelet[3685]: E0123 17:32:28.619579 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.620558 kubelet[3685]: E0123 17:32:28.619728 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.620708 kubelet[3685]: W0123 17:32:28.619735 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.620708 kubelet[3685]: E0123 17:32:28.619743 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.620708 kubelet[3685]: E0123 17:32:28.619898 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.620708 kubelet[3685]: W0123 17:32:28.619907 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.620708 kubelet[3685]: E0123 17:32:28.619916 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:32:28.620708 kubelet[3685]: E0123 17:32:28.620066 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.620708 kubelet[3685]: W0123 17:32:28.620074 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.620708 kubelet[3685]: E0123 17:32:28.620082 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.620708 kubelet[3685]: E0123 17:32:28.620233 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.620708 kubelet[3685]: W0123 17:32:28.620241 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.620867 kubelet[3685]: E0123 17:32:28.620264 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.620867 kubelet[3685]: E0123 17:32:28.620405 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.620867 kubelet[3685]: W0123 17:32:28.620412 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.620867 kubelet[3685]: E0123 17:32:28.620420 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.621426 kubelet[3685]: E0123 17:32:28.621101 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.621426 kubelet[3685]: W0123 17:32:28.621126 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.621426 kubelet[3685]: E0123 17:32:28.621137 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.621426 kubelet[3685]: E0123 17:32:28.621373 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.621426 kubelet[3685]: W0123 17:32:28.621384 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.621426 kubelet[3685]: E0123 17:32:28.621394 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:32:28.621926 kubelet[3685]: E0123 17:32:28.621913 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.622082 kubelet[3685]: W0123 17:32:28.621984 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.622082 kubelet[3685]: E0123 17:32:28.622000 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.622365 kubelet[3685]: E0123 17:32:28.622257 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.622365 kubelet[3685]: W0123 17:32:28.622268 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.622365 kubelet[3685]: E0123 17:32:28.622279 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.622541 kubelet[3685]: E0123 17:32:28.622532 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.622634 kubelet[3685]: W0123 17:32:28.622588 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.622634 kubelet[3685]: E0123 17:32:28.622600 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.622852 kubelet[3685]: E0123 17:32:28.622819 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.622852 kubelet[3685]: W0123 17:32:28.622829 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.622999 kubelet[3685]: E0123 17:32:28.622838 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.623189 kubelet[3685]: E0123 17:32:28.623157 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.623248 kubelet[3685]: W0123 17:32:28.623238 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.623295 kubelet[3685]: E0123 17:32:28.623285 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:32:28.623725 kubelet[3685]: E0123 17:32:28.623677 3685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:32:28.623725 kubelet[3685]: W0123 17:32:28.623690 3685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:32:28.623725 kubelet[3685]: E0123 17:32:28.623700 3685 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:32:28.856084 containerd[2084]: time="2026-01-23T17:32:28.855440029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:28.859480 containerd[2084]: time="2026-01-23T17:32:28.859427298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 23 17:32:28.862114 containerd[2084]: time="2026-01-23T17:32:28.862067229Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:28.866871 containerd[2084]: time="2026-01-23T17:32:28.866587373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:28.866871 containerd[2084]: time="2026-01-23T17:32:28.866786471Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.311279184s" Jan 23 17:32:28.866871 containerd[2084]: time="2026-01-23T17:32:28.866814737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 23 17:32:28.874905 containerd[2084]: time="2026-01-23T17:32:28.874465252Z" level=info msg="CreateContainer within sandbox \"ce1a28a92da0c2f7be55f07fc7d9c164d5e6d55e126b2d5ddae3c7304dde1379\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 17:32:28.894808 containerd[2084]: time="2026-01-23T17:32:28.893377134Z" level=info msg="Container 3d42bc837df3cff3eab929b202290171cf5abe37351a6ca9cc34e87fdc9b2083: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:32:28.911168 containerd[2084]: time="2026-01-23T17:32:28.911120158Z" level=info msg="CreateContainer within sandbox \"ce1a28a92da0c2f7be55f07fc7d9c164d5e6d55e126b2d5ddae3c7304dde1379\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3d42bc837df3cff3eab929b202290171cf5abe37351a6ca9cc34e87fdc9b2083\"" Jan 23 17:32:28.913077 containerd[2084]: time="2026-01-23T17:32:28.912051196Z" level=info msg="StartContainer for \"3d42bc837df3cff3eab929b202290171cf5abe37351a6ca9cc34e87fdc9b2083\"" Jan 23 17:32:28.913595 containerd[2084]: time="2026-01-23T17:32:28.913567199Z" level=info msg="connecting to shim 
3d42bc837df3cff3eab929b202290171cf5abe37351a6ca9cc34e87fdc9b2083" address="unix:///run/containerd/s/9392e13e552d87f195047f2e31f75d6bb6cbd248dc83dc2a407ea3be7c945cb7" protocol=ttrpc version=3 Jan 23 17:32:28.938146 systemd[1]: Started cri-containerd-3d42bc837df3cff3eab929b202290171cf5abe37351a6ca9cc34e87fdc9b2083.scope - libcontainer container 3d42bc837df3cff3eab929b202290171cf5abe37351a6ca9cc34e87fdc9b2083. Jan 23 17:32:28.974000 audit: BPF prog-id=190 op=LOAD Jan 23 17:32:28.978054 kernel: kauditd_printk_skb: 68 callbacks suppressed Jan 23 17:32:28.978260 kernel: audit: type=1334 audit(1769189548.974:590): prog-id=190 op=LOAD Jan 23 17:32:28.974000 audit[4297]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4176 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:28.998009 kernel: audit: type=1300 audit(1769189548.974:590): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4176 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:28.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364343262633833376466336366663365616239323962323032323930 Jan 23 17:32:29.014197 kernel: audit: type=1327 audit(1769189548.974:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364343262633833376466336366663365616239323962323032323930 Jan 23 17:32:28.976000 audit: BPF prog-id=191 op=LOAD Jan 23 17:32:29.018931 kernel: audit: type=1334 audit(1769189548.976:591): prog-id=191 op=LOAD Jan 23 17:32:28.976000 audit[4297]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4176 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:29.034765 kernel: audit: type=1300 audit(1769189548.976:591): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4176 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:28.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364343262633833376466336366663365616239323962323032323930 Jan 23 17:32:29.050799 kernel: audit: type=1327 audit(1769189548.976:591): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364343262633833376466336366663365616239323962323032323930 Jan 23 17:32:28.977000 audit: BPF prog-id=191 op=UNLOAD Jan 23 17:32:29.056955 kernel: audit: type=1334 
audit(1769189548.977:592): prog-id=191 op=UNLOAD Jan 23 17:32:28.977000 audit[4297]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4176 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:29.075498 kernel: audit: type=1300 audit(1769189548.977:592): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4176 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:28.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364343262633833376466336366663365616239323962323032323930 Jan 23 17:32:29.091673 kernel: audit: type=1327 audit(1769189548.977:592): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364343262633833376466336366663365616239323962323032323930 Jan 23 17:32:28.977000 audit: BPF prog-id=190 op=UNLOAD Jan 23 17:32:29.099261 kernel: audit: type=1334 audit(1769189548.977:593): prog-id=190 op=UNLOAD Jan 23 17:32:28.977000 audit[4297]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4176 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:28.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364343262633833376466336366663365616239323962323032323930 Jan 23 17:32:28.977000 audit: BPF prog-id=192 op=LOAD Jan 23 17:32:28.977000 audit[4297]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4176 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:28.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364343262633833376466336366663365616239323962323032323930 Jan 23 17:32:29.119450 containerd[2084]: time="2026-01-23T17:32:29.118773816Z" level=info msg="StartContainer for \"3d42bc837df3cff3eab929b202290171cf5abe37351a6ca9cc34e87fdc9b2083\" returns successfully" Jan 23 17:32:29.122792 systemd[1]: cri-containerd-3d42bc837df3cff3eab929b202290171cf5abe37351a6ca9cc34e87fdc9b2083.scope: Deactivated successfully. 
Jan 23 17:32:29.126000 audit: BPF prog-id=192 op=UNLOAD Jan 23 17:32:29.129958 containerd[2084]: time="2026-01-23T17:32:29.129137866Z" level=info msg="received container exit event container_id:\"3d42bc837df3cff3eab929b202290171cf5abe37351a6ca9cc34e87fdc9b2083\" id:\"3d42bc837df3cff3eab929b202290171cf5abe37351a6ca9cc34e87fdc9b2083\" pid:4309 exited_at:{seconds:1769189549 nanos:128655362}" Jan 23 17:32:29.155079 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d42bc837df3cff3eab929b202290171cf5abe37351a6ca9cc34e87fdc9b2083-rootfs.mount: Deactivated successfully. Jan 23 17:32:29.548068 kubelet[3685]: I0123 17:32:29.547939 3685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 17:32:29.564362 kubelet[3685]: I0123 17:32:29.564166 3685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6cfccf8478-x8xpq" podStartSLOduration=3.181531146 podStartE2EDuration="5.564146271s" podCreationTimestamp="2026-01-23 17:32:24 +0000 UTC" firstStartedPulling="2026-01-23 17:32:25.171212661 +0000 UTC m=+20.800758948" lastFinishedPulling="2026-01-23 17:32:27.553827786 +0000 UTC m=+23.183374073" observedRunningTime="2026-01-23 17:32:28.559463815 +0000 UTC m=+24.189010142" watchObservedRunningTime="2026-01-23 17:32:29.564146271 +0000 UTC m=+25.193692598" Jan 23 17:32:30.456226 kubelet[3685]: E0123 17:32:30.456048 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8t56" podUID="bea6d6d6-6443-4534-ac1b-26cecad019a7" Jan 23 17:32:30.554180 containerd[2084]: time="2026-01-23T17:32:30.553962054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 17:32:32.456387 kubelet[3685]: E0123 17:32:32.456026 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8t56" podUID="bea6d6d6-6443-4534-ac1b-26cecad019a7" Jan 23 17:32:32.713954 containerd[2084]: time="2026-01-23T17:32:32.713778445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:32.717553 containerd[2084]: time="2026-01-23T17:32:32.717487945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 23 17:32:32.720422 containerd[2084]: time="2026-01-23T17:32:32.720331671Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:32.724626 containerd[2084]: time="2026-01-23T17:32:32.724564290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:32.725207 containerd[2084]: time="2026-01-23T17:32:32.725075448Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.171072385s" Jan 23 17:32:32.725207 containerd[2084]: time="2026-01-23T17:32:32.725107586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 23 17:32:32.733532 containerd[2084]: time="2026-01-23T17:32:32.733429161Z" level=info msg="CreateContainer within sandbox \"ce1a28a92da0c2f7be55f07fc7d9c164d5e6d55e126b2d5ddae3c7304dde1379\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 17:32:32.754972 containerd[2084]: time="2026-01-23T17:32:32.754914503Z" level=info msg="Container d9a879069b31764c7281c16ca599f2a8c1ac88101bac70803167259a2d46531b: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:32:32.757568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4249083674.mount: Deactivated successfully. Jan 23 17:32:32.772457 containerd[2084]: time="2026-01-23T17:32:32.772405315Z" level=info msg="CreateContainer within sandbox \"ce1a28a92da0c2f7be55f07fc7d9c164d5e6d55e126b2d5ddae3c7304dde1379\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d9a879069b31764c7281c16ca599f2a8c1ac88101bac70803167259a2d46531b\"" Jan 23 17:32:32.773164 containerd[2084]: time="2026-01-23T17:32:32.773132851Z" level=info msg="StartContainer for \"d9a879069b31764c7281c16ca599f2a8c1ac88101bac70803167259a2d46531b\"" Jan 23 17:32:32.775894 containerd[2084]: time="2026-01-23T17:32:32.775828458Z" level=info msg="connecting to shim d9a879069b31764c7281c16ca599f2a8c1ac88101bac70803167259a2d46531b" address="unix:///run/containerd/s/9392e13e552d87f195047f2e31f75d6bb6cbd248dc83dc2a407ea3be7c945cb7" protocol=ttrpc version=3 Jan 23 17:32:32.795217 systemd[1]: Started cri-containerd-d9a879069b31764c7281c16ca599f2a8c1ac88101bac70803167259a2d46531b.scope - libcontainer container d9a879069b31764c7281c16ca599f2a8c1ac88101bac70803167259a2d46531b. 
Jan 23 17:32:32.833000 audit: BPF prog-id=193 op=LOAD Jan 23 17:32:32.833000 audit[4357]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=4176 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:32.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439613837393036396233313736346337323831633136636135393966 Jan 23 17:32:32.833000 audit: BPF prog-id=194 op=LOAD Jan 23 17:32:32.833000 audit[4357]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=4176 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:32.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439613837393036396233313736346337323831633136636135393966 Jan 23 17:32:32.833000 audit: BPF prog-id=194 op=UNLOAD Jan 23 17:32:32.833000 audit[4357]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4176 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:32.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439613837393036396233313736346337323831633136636135393966 Jan 23 17:32:32.833000 audit: BPF prog-id=193 op=UNLOAD Jan 23 17:32:32.833000 audit[4357]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4176 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:32.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439613837393036396233313736346337323831633136636135393966 Jan 23 17:32:32.833000 audit: BPF prog-id=195 op=LOAD Jan 23 17:32:32.833000 audit[4357]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=4176 pid=4357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:32.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439613837393036396233313736346337323831633136636135393966 Jan 23 17:32:32.854610 containerd[2084]: time="2026-01-23T17:32:32.854567721Z" level=info msg="StartContainer for 
\"d9a879069b31764c7281c16ca599f2a8c1ac88101bac70803167259a2d46531b\" returns successfully" Jan 23 17:32:33.985765 containerd[2084]: time="2026-01-23T17:32:33.985702488Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 17:32:33.988279 systemd[1]: cri-containerd-d9a879069b31764c7281c16ca599f2a8c1ac88101bac70803167259a2d46531b.scope: Deactivated successfully. Jan 23 17:32:33.988782 systemd[1]: cri-containerd-d9a879069b31764c7281c16ca599f2a8c1ac88101bac70803167259a2d46531b.scope: Consumed 354ms CPU time, 188.6M memory peak, 165.9M written to disk. Jan 23 17:32:33.990689 containerd[2084]: time="2026-01-23T17:32:33.990628441Z" level=info msg="received container exit event container_id:\"d9a879069b31764c7281c16ca599f2a8c1ac88101bac70803167259a2d46531b\" id:\"d9a879069b31764c7281c16ca599f2a8c1ac88101bac70803167259a2d46531b\" pid:4370 exited_at:{seconds:1769189553 nanos:990408512}" Jan 23 17:32:33.991000 audit: BPF prog-id=195 op=UNLOAD Jan 23 17:32:33.995346 kernel: kauditd_printk_skb: 21 callbacks suppressed Jan 23 17:32:33.995539 kernel: audit: type=1334 audit(1769189553.991:601): prog-id=195 op=UNLOAD Jan 23 17:32:34.012150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9a879069b31764c7281c16ca599f2a8c1ac88101bac70803167259a2d46531b-rootfs.mount: Deactivated successfully. Jan 23 17:32:34.064812 kubelet[3685]: I0123 17:32:34.064380 3685 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 23 17:32:34.856210 systemd[1]: Created slice kubepods-burstable-pod7fedd3f3_53a6_42e6_a84b_32923d7910c8.slice - libcontainer container kubepods-burstable-pod7fedd3f3_53a6_42e6_a84b_32923d7910c8.slice. Jan 23 17:32:34.861589 kubelet[3685]: I0123 17:32:34.856544 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fedd3f3-53a6-42e6-a84b-32923d7910c8-config-volume\") pod \"coredns-66bc5c9577-sf7ng\" (UID: \"7fedd3f3-53a6-42e6-a84b-32923d7910c8\") " pod="kube-system/coredns-66bc5c9577-sf7ng" Jan 23 17:32:34.861589 kubelet[3685]: I0123 17:32:34.856586 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2z7j\" (UniqueName: \"kubernetes.io/projected/7fedd3f3-53a6-42e6-a84b-32923d7910c8-kube-api-access-t2z7j\") pod \"coredns-66bc5c9577-sf7ng\" (UID: \"7fedd3f3-53a6-42e6-a84b-32923d7910c8\") " pod="kube-system/coredns-66bc5c9577-sf7ng" Jan 23 17:32:34.873164 systemd[1]: Created slice kubepods-besteffort-pod58fee7e2_7a02_433b_9bb7_4f5cf670cf10.slice - libcontainer container kubepods-besteffort-pod58fee7e2_7a02_433b_9bb7_4f5cf670cf10.slice. Jan 23 17:32:34.885761 systemd[1]: Created slice kubepods-besteffort-pod29070044_7a78_4c22_ba4e_b03de4973ab6.slice - libcontainer container kubepods-besteffort-pod29070044_7a78_4c22_ba4e_b03de4973ab6.slice. Jan 23 17:32:34.904077 systemd[1]: Created slice kubepods-besteffort-podbea6d6d6_6443_4534_ac1b_26cecad019a7.slice - libcontainer container kubepods-besteffort-podbea6d6d6_6443_4534_ac1b_26cecad019a7.slice. Jan 23 17:32:34.913448 systemd[1]: Created slice kubepods-burstable-pod917f100a_145b_467d_9973_bba49cd20f34.slice - libcontainer container kubepods-burstable-pod917f100a_145b_467d_9973_bba49cd20f34.slice. 
Jan 23 17:32:34.915314 containerd[2084]: time="2026-01-23T17:32:34.914577830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8t56,Uid:bea6d6d6-6443-4534-ac1b-26cecad019a7,Namespace:calico-system,Attempt:0,}" Jan 23 17:32:34.926972 systemd[1]: Created slice kubepods-besteffort-podce786634_1bb7_4148_9461_d169f302e50f.slice - libcontainer container kubepods-besteffort-podce786634_1bb7_4148_9461_d169f302e50f.slice. Jan 23 17:32:34.942247 systemd[1]: Created slice kubepods-besteffort-pod3673ff07_a128_4686_9fb6_6fd2ab66f4db.slice - libcontainer container kubepods-besteffort-pod3673ff07_a128_4686_9fb6_6fd2ab66f4db.slice. Jan 23 17:32:34.951871 systemd[1]: Created slice kubepods-besteffort-pode966f1d6_8ee3_4476_b957_9bae66b7553b.slice - libcontainer container kubepods-besteffort-pode966f1d6_8ee3_4476_b957_9bae66b7553b.slice. Jan 23 17:32:34.957519 kubelet[3685]: I0123 17:32:34.957061 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sksz6\" (UniqueName: \"kubernetes.io/projected/29070044-7a78-4c22-ba4e-b03de4973ab6-kube-api-access-sksz6\") pod \"calico-apiserver-85fb74bcbb-d6mpj\" (UID: \"29070044-7a78-4c22-ba4e-b03de4973ab6\") " pod="calico-apiserver/calico-apiserver-85fb74bcbb-d6mpj" Jan 23 17:32:34.957519 kubelet[3685]: I0123 17:32:34.957103 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfmhv\" (UniqueName: \"kubernetes.io/projected/3673ff07-a128-4686-9fb6-6fd2ab66f4db-kube-api-access-zfmhv\") pod \"goldmane-7c778bb748-kwnqz\" (UID: \"3673ff07-a128-4686-9fb6-6fd2ab66f4db\") " pod="calico-system/goldmane-7c778bb748-kwnqz" Jan 23 17:32:34.957519 kubelet[3685]: I0123 17:32:34.957206 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgs7h\" (UniqueName: \"kubernetes.io/projected/58fee7e2-7a02-433b-9bb7-4f5cf670cf10-kube-api-access-dgs7h\") pod \"whisker-56747bfb8-cz4p4\" (UID: \"58fee7e2-7a02-433b-9bb7-4f5cf670cf10\") " pod="calico-system/whisker-56747bfb8-cz4p4" Jan 23 17:32:34.957519 kubelet[3685]: I0123 17:32:34.957221 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/29070044-7a78-4c22-ba4e-b03de4973ab6-calico-apiserver-certs\") pod \"calico-apiserver-85fb74bcbb-d6mpj\" (UID: \"29070044-7a78-4c22-ba4e-b03de4973ab6\") " pod="calico-apiserver/calico-apiserver-85fb74bcbb-d6mpj" Jan 23 17:32:34.957519 kubelet[3685]: I0123 17:32:34.957235 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/917f100a-145b-467d-9973-bba49cd20f34-config-volume\") pod \"coredns-66bc5c9577-cnkb7\" (UID: \"917f100a-145b-467d-9973-bba49cd20f34\") " pod="kube-system/coredns-66bc5c9577-cnkb7" Jan 23 17:32:34.958759 kubelet[3685]: I0123 17:32:34.957247 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3673ff07-a128-4686-9fb6-6fd2ab66f4db-config\") pod \"goldmane-7c778bb748-kwnqz\" (UID: \"3673ff07-a128-4686-9fb6-6fd2ab66f4db\") " pod="calico-system/goldmane-7c778bb748-kwnqz" Jan 23 17:32:34.958759 kubelet[3685]: I0123 17:32:34.957257 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/58fee7e2-7a02-433b-9bb7-4f5cf670cf10-whisker-backend-key-pair\") pod \"whisker-56747bfb8-cz4p4\" (UID: \"58fee7e2-7a02-433b-9bb7-4f5cf670cf10\") " pod="calico-system/whisker-56747bfb8-cz4p4" Jan 23 17:32:34.958759 kubelet[3685]: I0123 17:32:34.957378 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3673ff07-a128-4686-9fb6-6fd2ab66f4db-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-kwnqz\" (UID: \"3673ff07-a128-4686-9fb6-6fd2ab66f4db\") " pod="calico-system/goldmane-7c778bb748-kwnqz" Jan 23 17:32:34.958759 kubelet[3685]: I0123 17:32:34.957416 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58fee7e2-7a02-433b-9bb7-4f5cf670cf10-whisker-ca-bundle\") pod \"whisker-56747bfb8-cz4p4\" (UID: \"58fee7e2-7a02-433b-9bb7-4f5cf670cf10\") " pod="calico-system/whisker-56747bfb8-cz4p4" Jan 23 17:32:34.958759 kubelet[3685]: I0123 17:32:34.957427 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e966f1d6-8ee3-4476-b957-9bae66b7553b-tigera-ca-bundle\") pod \"calico-kube-controllers-55dddbdf7b-97cf9\" (UID: \"e966f1d6-8ee3-4476-b957-9bae66b7553b\") " pod="calico-system/calico-kube-controllers-55dddbdf7b-97cf9" Jan 23 17:32:34.958862 kubelet[3685]: I0123 17:32:34.957523 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlh6j\" (UniqueName: \"kubernetes.io/projected/917f100a-145b-467d-9973-bba49cd20f34-kube-api-access-qlh6j\") pod \"coredns-66bc5c9577-cnkb7\" (UID: \"917f100a-145b-467d-9973-bba49cd20f34\") " pod="kube-system/coredns-66bc5c9577-cnkb7" Jan 23 17:32:34.958862 kubelet[3685]: I0123 17:32:34.957541 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8rdk\" (UniqueName: \"kubernetes.io/projected/ce786634-1bb7-4148-9461-d169f302e50f-kube-api-access-r8rdk\") pod \"calico-apiserver-85fb74bcbb-xldvx\" (UID: \"ce786634-1bb7-4148-9461-d169f302e50f\") " pod="calico-apiserver/calico-apiserver-85fb74bcbb-xldvx" Jan 23 17:32:34.958862 kubelet[3685]: I0123 17:32:34.957555 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ce786634-1bb7-4148-9461-d169f302e50f-calico-apiserver-certs\") pod \"calico-apiserver-85fb74bcbb-xldvx\" (UID: \"ce786634-1bb7-4148-9461-d169f302e50f\") " pod="calico-apiserver/calico-apiserver-85fb74bcbb-xldvx" Jan 23 17:32:34.958862 kubelet[3685]: I0123 17:32:34.957569 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv2bd\" (UniqueName: \"kubernetes.io/projected/e966f1d6-8ee3-4476-b957-9bae66b7553b-kube-api-access-qv2bd\") pod \"calico-kube-controllers-55dddbdf7b-97cf9\" (UID: \"e966f1d6-8ee3-4476-b957-9bae66b7553b\") " pod="calico-system/calico-kube-controllers-55dddbdf7b-97cf9" Jan 23 17:32:34.958862 kubelet[3685]: I0123 17:32:34.957586 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3673ff07-a128-4686-9fb6-6fd2ab66f4db-goldmane-key-pair\") pod \"goldmane-7c778bb748-kwnqz\" (UID: 
\"3673ff07-a128-4686-9fb6-6fd2ab66f4db\") " pod="calico-system/goldmane-7c778bb748-kwnqz" Jan 23 17:32:35.010193 containerd[2084]: time="2026-01-23T17:32:35.010064817Z" level=error msg="Failed to destroy network for sandbox \"8b6718c03083695b4375ccaf0f3d52d5401fd6f18363c17cd876ea59d435c222\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.011928 systemd[1]: run-netns-cni\x2d96e6b196\x2d0f77\x2de360\x2dccc5\x2d148a4c3e6bc7.mount: Deactivated successfully. Jan 23 17:32:35.018000 containerd[2084]: time="2026-01-23T17:32:35.017942525Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8t56,Uid:bea6d6d6-6443-4534-ac1b-26cecad019a7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6718c03083695b4375ccaf0f3d52d5401fd6f18363c17cd876ea59d435c222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.018407 kubelet[3685]: E0123 17:32:35.018343 3685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6718c03083695b4375ccaf0f3d52d5401fd6f18363c17cd876ea59d435c222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.018478 kubelet[3685]: E0123 17:32:35.018439 3685 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6718c03083695b4375ccaf0f3d52d5401fd6f18363c17cd876ea59d435c222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v8t56" Jan 23 17:32:35.018478 kubelet[3685]: E0123 17:32:35.018457 3685 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6718c03083695b4375ccaf0f3d52d5401fd6f18363c17cd876ea59d435c222\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v8t56" Jan 23 17:32:35.018527 kubelet[3685]: E0123 17:32:35.018510 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v8t56_calico-system(bea6d6d6-6443-4534-ac1b-26cecad019a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v8t56_calico-system(bea6d6d6-6443-4534-ac1b-26cecad019a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b6718c03083695b4375ccaf0f3d52d5401fd6f18363c17cd876ea59d435c222\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v8t56" podUID="bea6d6d6-6443-4534-ac1b-26cecad019a7" Jan 23 17:32:35.173136 containerd[2084]: time="2026-01-23T17:32:35.173094620Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-sf7ng,Uid:7fedd3f3-53a6-42e6-a84b-32923d7910c8,Namespace:kube-system,Attempt:0,}" Jan 23 17:32:35.188075 containerd[2084]: time="2026-01-23T17:32:35.188013047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56747bfb8-cz4p4,Uid:58fee7e2-7a02-433b-9bb7-4f5cf670cf10,Namespace:calico-system,Attempt:0,}" Jan 23 17:32:35.202679 containerd[2084]: time="2026-01-23T17:32:35.202499383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85fb74bcbb-d6mpj,Uid:29070044-7a78-4c22-ba4e-b03de4973ab6,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:32:35.224739 containerd[2084]: time="2026-01-23T17:32:35.224683715Z" level=error msg="Failed to destroy network for sandbox \"69828a0ff3738e02b13676ba76c4ffc23678d6a477cb394bf44558eb95a97451\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.227869 containerd[2084]: time="2026-01-23T17:32:35.227735826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cnkb7,Uid:917f100a-145b-467d-9973-bba49cd20f34,Namespace:kube-system,Attempt:0,}" Jan 23 17:32:35.232898 containerd[2084]: time="2026-01-23T17:32:35.232631667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sf7ng,Uid:7fedd3f3-53a6-42e6-a84b-32923d7910c8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"69828a0ff3738e02b13676ba76c4ffc23678d6a477cb394bf44558eb95a97451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.234704 kubelet[3685]: E0123 17:32:35.234483 3685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69828a0ff3738e02b13676ba76c4ffc23678d6a477cb394bf44558eb95a97451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.236011 kubelet[3685]: E0123 17:32:35.234995 3685 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69828a0ff3738e02b13676ba76c4ffc23678d6a477cb394bf44558eb95a97451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-sf7ng" Jan 23 17:32:35.236011 kubelet[3685]: E0123 17:32:35.235023 3685 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69828a0ff3738e02b13676ba76c4ffc23678d6a477cb394bf44558eb95a97451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-sf7ng" Jan 23 17:32:35.236011 kubelet[3685]: E0123 17:32:35.235066 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-sf7ng_kube-system(7fedd3f3-53a6-42e6-a84b-32923d7910c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-66bc5c9577-sf7ng_kube-system(7fedd3f3-53a6-42e6-a84b-32923d7910c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69828a0ff3738e02b13676ba76c4ffc23678d6a477cb394bf44558eb95a97451\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-sf7ng" podUID="7fedd3f3-53a6-42e6-a84b-32923d7910c8" Jan 23 17:32:35.247712 containerd[2084]: time="2026-01-23T17:32:35.247670691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85fb74bcbb-xldvx,Uid:ce786634-1bb7-4148-9461-d169f302e50f,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:32:35.252769 containerd[2084]: time="2026-01-23T17:32:35.252445702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-kwnqz,Uid:3673ff07-a128-4686-9fb6-6fd2ab66f4db,Namespace:calico-system,Attempt:0,}" Jan 23 17:32:35.261740 containerd[2084]: time="2026-01-23T17:32:35.261700767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55dddbdf7b-97cf9,Uid:e966f1d6-8ee3-4476-b957-9bae66b7553b,Namespace:calico-system,Attempt:0,}" Jan 23 17:32:35.278899 containerd[2084]: time="2026-01-23T17:32:35.278801930Z" level=error msg="Failed to destroy network for sandbox \"7220eca50aeded082ddbec80938203a80680d5fdb9bddbb2f0f7c2448c42ea59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.285267 containerd[2084]: time="2026-01-23T17:32:35.285145371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56747bfb8-cz4p4,Uid:58fee7e2-7a02-433b-9bb7-4f5cf670cf10,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7220eca50aeded082ddbec80938203a80680d5fdb9bddbb2f0f7c2448c42ea59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.285862 kubelet[3685]: E0123 17:32:35.285805 3685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7220eca50aeded082ddbec80938203a80680d5fdb9bddbb2f0f7c2448c42ea59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.286985 kubelet[3685]: E0123 17:32:35.286924 3685 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7220eca50aeded082ddbec80938203a80680d5fdb9bddbb2f0f7c2448c42ea59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56747bfb8-cz4p4" Jan 23 17:32:35.286985 kubelet[3685]: E0123 17:32:35.286974 3685 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7220eca50aeded082ddbec80938203a80680d5fdb9bddbb2f0f7c2448c42ea59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56747bfb8-cz4p4" Jan 23 17:32:35.287177 kubelet[3685]: E0123 17:32:35.287052 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-56747bfb8-cz4p4_calico-system(58fee7e2-7a02-433b-9bb7-4f5cf670cf10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-56747bfb8-cz4p4_calico-system(58fee7e2-7a02-433b-9bb7-4f5cf670cf10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7220eca50aeded082ddbec80938203a80680d5fdb9bddbb2f0f7c2448c42ea59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56747bfb8-cz4p4" podUID="58fee7e2-7a02-433b-9bb7-4f5cf670cf10" Jan 23 17:32:35.290373 containerd[2084]: time="2026-01-23T17:32:35.290253884Z" level=error msg="Failed to destroy network for sandbox \"564a58ee0ebaa01261e795e44406bc8e479f1c7047aa3e23d7e407b3b3228d4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.303408 containerd[2084]: time="2026-01-23T17:32:35.303188616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85fb74bcbb-d6mpj,Uid:29070044-7a78-4c22-ba4e-b03de4973ab6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"564a58ee0ebaa01261e795e44406bc8e479f1c7047aa3e23d7e407b3b3228d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.303572 kubelet[3685]: E0123 17:32:35.303442 3685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"564a58ee0ebaa01261e795e44406bc8e479f1c7047aa3e23d7e407b3b3228d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.303572 kubelet[3685]: E0123 17:32:35.303493 3685 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"564a58ee0ebaa01261e795e44406bc8e479f1c7047aa3e23d7e407b3b3228d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85fb74bcbb-d6mpj" Jan 23 17:32:35.303572 kubelet[3685]: E0123 17:32:35.303508 3685 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"564a58ee0ebaa01261e795e44406bc8e479f1c7047aa3e23d7e407b3b3228d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85fb74bcbb-d6mpj" Jan 23 17:32:35.303652 kubelet[3685]: E0123 17:32:35.303559 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-85fb74bcbb-d6mpj_calico-apiserver(29070044-7a78-4c22-ba4e-b03de4973ab6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85fb74bcbb-d6mpj_calico-apiserver(29070044-7a78-4c22-ba4e-b03de4973ab6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"564a58ee0ebaa01261e795e44406bc8e479f1c7047aa3e23d7e407b3b3228d4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-d6mpj" podUID="29070044-7a78-4c22-ba4e-b03de4973ab6" Jan 23 17:32:35.329107 containerd[2084]: time="2026-01-23T17:32:35.329036630Z" level=error msg="Failed to destroy network for sandbox \"643f3249f40b3e51d931ec5497137d4794f985584b20fbe6635dde980bd19197\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.338599 containerd[2084]: time="2026-01-23T17:32:35.338469631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cnkb7,Uid:917f100a-145b-467d-9973-bba49cd20f34,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"643f3249f40b3e51d931ec5497137d4794f985584b20fbe6635dde980bd19197\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.339899 kubelet[3685]: E0123 17:32:35.338950 3685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"643f3249f40b3e51d931ec5497137d4794f985584b20fbe6635dde980bd19197\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.339899 kubelet[3685]: E0123 17:32:35.339031 3685 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"643f3249f40b3e51d931ec5497137d4794f985584b20fbe6635dde980bd19197\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-cnkb7" Jan 23 17:32:35.339899 kubelet[3685]: E0123 17:32:35.339072 3685 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"643f3249f40b3e51d931ec5497137d4794f985584b20fbe6635dde980bd19197\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-cnkb7" Jan 23 17:32:35.340051 kubelet[3685]: E0123 17:32:35.339131 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-cnkb7_kube-system(917f100a-145b-467d-9973-bba49cd20f34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-cnkb7_kube-system(917f100a-145b-467d-9973-bba49cd20f34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"643f3249f40b3e51d931ec5497137d4794f985584b20fbe6635dde980bd19197\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-cnkb7" podUID="917f100a-145b-467d-9973-bba49cd20f34" Jan 23 17:32:35.357045 containerd[2084]: time="2026-01-23T17:32:35.356985201Z" level=error msg="Failed to destroy network for sandbox \"3a03d7efd6717f3d5419d4f661eb990ffd8aee8cc9438fa02f5d8603d42b7cf1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.364220 containerd[2084]: time="2026-01-23T17:32:35.364158510Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55dddbdf7b-97cf9,Uid:e966f1d6-8ee3-4476-b957-9bae66b7553b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a03d7efd6717f3d5419d4f661eb990ffd8aee8cc9438fa02f5d8603d42b7cf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.365370 kubelet[3685]: E0123 17:32:35.364981 3685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a03d7efd6717f3d5419d4f661eb990ffd8aee8cc9438fa02f5d8603d42b7cf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.365370 kubelet[3685]: E0123 17:32:35.365036 3685 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a03d7efd6717f3d5419d4f661eb990ffd8aee8cc9438fa02f5d8603d42b7cf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55dddbdf7b-97cf9" Jan 23 17:32:35.365370 kubelet[3685]: E0123 17:32:35.365055 3685 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a03d7efd6717f3d5419d4f661eb990ffd8aee8cc9438fa02f5d8603d42b7cf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55dddbdf7b-97cf9" Jan 23 17:32:35.365540 kubelet[3685]: E0123 17:32:35.365098 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55dddbdf7b-97cf9_calico-system(e966f1d6-8ee3-4476-b957-9bae66b7553b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55dddbdf7b-97cf9_calico-system(e966f1d6-8ee3-4476-b957-9bae66b7553b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a03d7efd6717f3d5419d4f661eb990ffd8aee8cc9438fa02f5d8603d42b7cf1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-55dddbdf7b-97cf9" podUID="e966f1d6-8ee3-4476-b957-9bae66b7553b" Jan 23 17:32:35.372017 containerd[2084]: time="2026-01-23T17:32:35.371965190Z" level=error msg="Failed to destroy network for sandbox \"cff21ec974ef50e44cb0b4bc1d7ef73607233fdf98b74a1e2bf7bc0f04a40a0d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.373999 containerd[2084]: time="2026-01-23T17:32:35.373958406Z" level=error msg="Failed to destroy network for sandbox \"f350136d5e9a450adf6f7b1db95e51661d80197f20a27038b597192c793a7f20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.378163 containerd[2084]: time="2026-01-23T17:32:35.378119222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-kwnqz,Uid:3673ff07-a128-4686-9fb6-6fd2ab66f4db,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cff21ec974ef50e44cb0b4bc1d7ef73607233fdf98b74a1e2bf7bc0f04a40a0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.378696 kubelet[3685]: E0123 17:32:35.378629 3685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cff21ec974ef50e44cb0b4bc1d7ef73607233fdf98b74a1e2bf7bc0f04a40a0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.379156 kubelet[3685]: E0123 17:32:35.378683 3685 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cff21ec974ef50e44cb0b4bc1d7ef73607233fdf98b74a1e2bf7bc0f04a40a0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-kwnqz" Jan 23 17:32:35.379156 kubelet[3685]: E0123 17:32:35.378815 3685 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cff21ec974ef50e44cb0b4bc1d7ef73607233fdf98b74a1e2bf7bc0f04a40a0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-kwnqz" Jan 23 17:32:35.379340 kubelet[3685]: E0123 17:32:35.379304 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-kwnqz_calico-system(3673ff07-a128-4686-9fb6-6fd2ab66f4db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-kwnqz_calico-system(3673ff07-a128-4686-9fb6-6fd2ab66f4db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cff21ec974ef50e44cb0b4bc1d7ef73607233fdf98b74a1e2bf7bc0f04a40a0d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-kwnqz" podUID="3673ff07-a128-4686-9fb6-6fd2ab66f4db" Jan 23 17:32:35.383798 containerd[2084]: time="2026-01-23T17:32:35.383743015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85fb74bcbb-xldvx,Uid:ce786634-1bb7-4148-9461-d169f302e50f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f350136d5e9a450adf6f7b1db95e51661d80197f20a27038b597192c793a7f20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.384104 kubelet[3685]: E0123 17:32:35.384079 3685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f350136d5e9a450adf6f7b1db95e51661d80197f20a27038b597192c793a7f20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:32:35.384146 kubelet[3685]: E0123 17:32:35.384132 3685 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f350136d5e9a450adf6f7b1db95e51661d80197f20a27038b597192c793a7f20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85fb74bcbb-xldvx" Jan 23 17:32:35.384166 kubelet[3685]: E0123 17:32:35.384147 3685 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f350136d5e9a450adf6f7b1db95e51661d80197f20a27038b597192c793a7f20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85fb74bcbb-xldvx" Jan 23 17:32:35.384447 kubelet[3685]: E0123 17:32:35.384243 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85fb74bcbb-xldvx_calico-apiserver(ce786634-1bb7-4148-9461-d169f302e50f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85fb74bcbb-xldvx_calico-apiserver(ce786634-1bb7-4148-9461-d169f302e50f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f350136d5e9a450adf6f7b1db95e51661d80197f20a27038b597192c793a7f20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-xldvx" podUID="ce786634-1bb7-4148-9461-d169f302e50f" Jan 23 17:32:35.577608 containerd[2084]: time="2026-01-23T17:32:35.576866843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 17:32:39.557456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2147759170.mount: Deactivated successfully. 
Jan 23 17:32:40.547553 containerd[2084]: time="2026-01-23T17:32:40.547406565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:40.595685 containerd[2084]: time="2026-01-23T17:32:40.595141825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 23 17:32:40.599088 containerd[2084]: time="2026-01-23T17:32:40.599041813Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:40.604416 containerd[2084]: time="2026-01-23T17:32:40.604377492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:32:40.604894 containerd[2084]: time="2026-01-23T17:32:40.604839624Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 5.027928531s" Jan 23 17:32:40.604942 containerd[2084]: time="2026-01-23T17:32:40.604896466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 23 17:32:40.650244 containerd[2084]: time="2026-01-23T17:32:40.650186567Z" level=info msg="CreateContainer within sandbox \"ce1a28a92da0c2f7be55f07fc7d9c164d5e6d55e126b2d5ddae3c7304dde1379\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 17:32:40.958635 containerd[2084]: time="2026-01-23T17:32:40.957913783Z" level=info msg="Container 5844781898e2d4b6313c8127cd5be1e2c93cbe0a4a04375b93c61e02e68ed7dd: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:32:40.960253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2368953682.mount: Deactivated successfully. Jan 23 17:32:40.981941 containerd[2084]: time="2026-01-23T17:32:40.981389928Z" level=info msg="CreateContainer within sandbox \"ce1a28a92da0c2f7be55f07fc7d9c164d5e6d55e126b2d5ddae3c7304dde1379\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5844781898e2d4b6313c8127cd5be1e2c93cbe0a4a04375b93c61e02e68ed7dd\"" Jan 23 17:32:40.982889 containerd[2084]: time="2026-01-23T17:32:40.982243836Z" level=info msg="StartContainer for \"5844781898e2d4b6313c8127cd5be1e2c93cbe0a4a04375b93c61e02e68ed7dd\"" Jan 23 17:32:40.983839 containerd[2084]: time="2026-01-23T17:32:40.983801941Z" level=info msg="connecting to shim 5844781898e2d4b6313c8127cd5be1e2c93cbe0a4a04375b93c61e02e68ed7dd" address="unix:///run/containerd/s/9392e13e552d87f195047f2e31f75d6bb6cbd248dc83dc2a407ea3be7c945cb7" protocol=ttrpc version=3 Jan 23 17:32:41.006096 systemd[1]: Started cri-containerd-5844781898e2d4b6313c8127cd5be1e2c93cbe0a4a04375b93c61e02e68ed7dd.scope - libcontainer container 5844781898e2d4b6313c8127cd5be1e2c93cbe0a4a04375b93c61e02e68ed7dd. 
Jan 23 17:32:41.058000 audit: BPF prog-id=196 op=LOAD Jan 23 17:32:41.058000 audit[4623]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4176 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:41.083667 kernel: audit: type=1334 audit(1769189561.058:602): prog-id=196 op=LOAD Jan 23 17:32:41.083792 kernel: audit: type=1300 audit(1769189561.058:602): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4176 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:41.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538343437383138393865326434623633313363383132376364356265 Jan 23 17:32:41.058000 audit: BPF prog-id=197 op=LOAD Jan 23 17:32:41.106011 kernel: audit: type=1327 audit(1769189561.058:602): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538343437383138393865326434623633313363383132376364356265 Jan 23 17:32:41.106089 kernel: audit: type=1334 audit(1769189561.058:603): prog-id=197 op=LOAD Jan 23 17:32:41.058000 audit[4623]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4176 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:41.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538343437383138393865326434623633313363383132376364356265 Jan 23 17:32:41.143450 kernel: audit: type=1300 audit(1769189561.058:603): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4176 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:41.143558 kernel: audit: type=1327 audit(1769189561.058:603): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538343437383138393865326434623633313363383132376364356265 Jan 23 17:32:41.058000 audit: BPF prog-id=197 op=UNLOAD Jan 23 17:32:41.058000 audit[4623]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4176 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:41.166571 kernel: audit: type=1334 audit(1769189561.058:604): prog-id=197 op=UNLOAD Jan 23 17:32:41.166678 kernel: audit: type=1300 audit(1769189561.058:604): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 
items=0 ppid=4176 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:41.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538343437383138393865326434623633313363383132376364356265 Jan 23 17:32:41.230762 kernel: audit: type=1327 audit(1769189561.058:604): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538343437383138393865326434623633313363383132376364356265 Jan 23 17:32:41.058000 audit: BPF prog-id=196 op=UNLOAD Jan 23 17:32:41.235027 containerd[2084]: time="2026-01-23T17:32:41.233622613Z" level=info msg="StartContainer for \"5844781898e2d4b6313c8127cd5be1e2c93cbe0a4a04375b93c61e02e68ed7dd\" returns successfully" Jan 23 17:32:41.238406 kernel: audit: type=1334 audit(1769189561.058:605): prog-id=196 op=UNLOAD Jan 23 17:32:41.058000 audit[4623]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4176 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:41.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538343437383138393865326434623633313363383132376364356265 Jan 23 17:32:41.058000 audit: BPF prog-id=198 op=LOAD Jan 23 17:32:41.058000 audit[4623]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4176 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:41.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538343437383138393865326434623633313363383132376364356265 Jan 23 17:32:41.431032 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 17:32:41.431188 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 23 17:32:41.595831 kubelet[3685]: I0123 17:32:41.595669 3685 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgs7h\" (UniqueName: \"kubernetes.io/projected/58fee7e2-7a02-433b-9bb7-4f5cf670cf10-kube-api-access-dgs7h\") pod \"58fee7e2-7a02-433b-9bb7-4f5cf670cf10\" (UID: \"58fee7e2-7a02-433b-9bb7-4f5cf670cf10\") " Jan 23 17:32:41.595831 kubelet[3685]: I0123 17:32:41.595714 3685 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/58fee7e2-7a02-433b-9bb7-4f5cf670cf10-whisker-backend-key-pair\") pod \"58fee7e2-7a02-433b-9bb7-4f5cf670cf10\" (UID: \"58fee7e2-7a02-433b-9bb7-4f5cf670cf10\") " Jan 23 17:32:41.595831 kubelet[3685]: I0123 17:32:41.595734 3685 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58fee7e2-7a02-433b-9bb7-4f5cf670cf10-whisker-ca-bundle\") pod \"58fee7e2-7a02-433b-9bb7-4f5cf670cf10\" (UID: \"58fee7e2-7a02-433b-9bb7-4f5cf670cf10\") " Jan 23 17:32:41.598203 kubelet[3685]: I0123 17:32:41.597035 3685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58fee7e2-7a02-433b-9bb7-4f5cf670cf10-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "58fee7e2-7a02-433b-9bb7-4f5cf670cf10" (UID: "58fee7e2-7a02-433b-9bb7-4f5cf670cf10"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 17:32:41.604319 kubelet[3685]: I0123 17:32:41.604075 3685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58fee7e2-7a02-433b-9bb7-4f5cf670cf10-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "58fee7e2-7a02-433b-9bb7-4f5cf670cf10" (UID: "58fee7e2-7a02-433b-9bb7-4f5cf670cf10"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 17:32:41.605288 kubelet[3685]: I0123 17:32:41.605244 3685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58fee7e2-7a02-433b-9bb7-4f5cf670cf10-kube-api-access-dgs7h" (OuterVolumeSpecName: "kube-api-access-dgs7h") pod "58fee7e2-7a02-433b-9bb7-4f5cf670cf10" (UID: "58fee7e2-7a02-433b-9bb7-4f5cf670cf10"). InnerVolumeSpecName "kube-api-access-dgs7h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 17:32:41.610301 systemd[1]: var-lib-kubelet-pods-58fee7e2\x2d7a02\x2d433b\x2d9bb7\x2d4f5cf670cf10-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddgs7h.mount: Deactivated successfully. Jan 23 17:32:41.614115 systemd[1]: var-lib-kubelet-pods-58fee7e2\x2d7a02\x2d433b\x2d9bb7\x2d4f5cf670cf10-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 23 17:32:41.636475 kubelet[3685]: I0123 17:32:41.636410 3685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-x8jq4" podStartSLOduration=2.72129861 podStartE2EDuration="17.636386122s" podCreationTimestamp="2026-01-23 17:32:24 +0000 UTC" firstStartedPulling="2026-01-23 17:32:25.690790843 +0000 UTC m=+21.320337130" lastFinishedPulling="2026-01-23 17:32:40.605878355 +0000 UTC m=+36.235424642" observedRunningTime="2026-01-23 17:32:41.635512221 +0000 UTC m=+37.265058540" watchObservedRunningTime="2026-01-23 17:32:41.636386122 +0000 UTC m=+37.265932409" Jan 23 17:32:41.696269 kubelet[3685]: I0123 17:32:41.696189 3685 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dgs7h\" (UniqueName: \"kubernetes.io/projected/58fee7e2-7a02-433b-9bb7-4f5cf670cf10-kube-api-access-dgs7h\") on node \"ci-4547.1.0-a-71c1b0067a\" DevicePath \"\"" Jan 23 17:32:41.696269 kubelet[3685]: I0123 17:32:41.696224 3685 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/58fee7e2-7a02-433b-9bb7-4f5cf670cf10-whisker-backend-key-pair\") on node \"ci-4547.1.0-a-71c1b0067a\" DevicePath \"\"" Jan 23 17:32:41.696269 kubelet[3685]: I0123 17:32:41.696234 3685 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58fee7e2-7a02-433b-9bb7-4f5cf670cf10-whisker-ca-bundle\") on node \"ci-4547.1.0-a-71c1b0067a\" DevicePath \"\"" Jan 23 17:32:41.900445 systemd[1]: Removed slice kubepods-besteffort-pod58fee7e2_7a02_433b_9bb7_4f5cf670cf10.slice - libcontainer container kubepods-besteffort-pod58fee7e2_7a02_433b_9bb7_4f5cf670cf10.slice. Jan 23 17:32:42.007637 systemd[1]: Created slice kubepods-besteffort-podc0c6a4bb_e851_48c7_afc7_8d5b88a4086b.slice - libcontainer container kubepods-besteffort-podc0c6a4bb_e851_48c7_afc7_8d5b88a4086b.slice. 
Jan 23 17:32:42.098173 kubelet[3685]: I0123 17:32:42.098032 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c0c6a4bb-e851-48c7-afc7-8d5b88a4086b-whisker-backend-key-pair\") pod \"whisker-844d8cf486-rp7bn\" (UID: \"c0c6a4bb-e851-48c7-afc7-8d5b88a4086b\") " pod="calico-system/whisker-844d8cf486-rp7bn" Jan 23 17:32:42.098173 kubelet[3685]: I0123 17:32:42.098076 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0c6a4bb-e851-48c7-afc7-8d5b88a4086b-whisker-ca-bundle\") pod \"whisker-844d8cf486-rp7bn\" (UID: \"c0c6a4bb-e851-48c7-afc7-8d5b88a4086b\") " pod="calico-system/whisker-844d8cf486-rp7bn" Jan 23 17:32:42.098173 kubelet[3685]: I0123 17:32:42.098092 3685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz8rn\" (UniqueName: \"kubernetes.io/projected/c0c6a4bb-e851-48c7-afc7-8d5b88a4086b-kube-api-access-kz8rn\") pod \"whisker-844d8cf486-rp7bn\" (UID: \"c0c6a4bb-e851-48c7-afc7-8d5b88a4086b\") " pod="calico-system/whisker-844d8cf486-rp7bn" Jan 23 17:32:42.318727 containerd[2084]: time="2026-01-23T17:32:42.318465604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-844d8cf486-rp7bn,Uid:c0c6a4bb-e851-48c7-afc7-8d5b88a4086b,Namespace:calico-system,Attempt:0,}" Jan 23 17:32:42.458350 kubelet[3685]: I0123 17:32:42.458302 3685 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58fee7e2-7a02-433b-9bb7-4f5cf670cf10" path="/var/lib/kubelet/pods/58fee7e2-7a02-433b-9bb7-4f5cf670cf10/volumes" Jan 23 17:32:42.464984 systemd-networkd[1662]: cali1dcc8aa8b42: Link UP Jan 23 17:32:42.465169 systemd-networkd[1662]: cali1dcc8aa8b42: Gained carrier Jan 23 17:32:42.484540 containerd[2084]: 2026-01-23 17:32:42.341 [INFO][4709] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 17:32:42.484540 containerd[2084]: 2026-01-23 17:32:42.381 [INFO][4709] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--71c1b0067a-k8s-whisker--844d8cf486--rp7bn-eth0 whisker-844d8cf486- calico-system c0c6a4bb-e851-48c7-afc7-8d5b88a4086b 907 0 2026-01-23 17:32:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:844d8cf486 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547.1.0-a-71c1b0067a whisker-844d8cf486-rp7bn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1dcc8aa8b42 [] [] }} ContainerID="aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611" Namespace="calico-system" Pod="whisker-844d8cf486-rp7bn" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-whisker--844d8cf486--rp7bn-" Jan 23 17:32:42.484540 containerd[2084]: 2026-01-23 17:32:42.381 [INFO][4709] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611" Namespace="calico-system" Pod="whisker-844d8cf486-rp7bn" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-whisker--844d8cf486--rp7bn-eth0" Jan 23 17:32:42.484540 containerd[2084]: 2026-01-23 17:32:42.402 [INFO][4720] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611" 
HandleID="k8s-pod-network.aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611" Workload="ci--4547.1.0--a--71c1b0067a-k8s-whisker--844d8cf486--rp7bn-eth0" Jan 23 17:32:42.484911 containerd[2084]: 2026-01-23 17:32:42.402 [INFO][4720] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611" HandleID="k8s-pod-network.aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611" Workload="ci--4547.1.0--a--71c1b0067a-k8s-whisker--844d8cf486--rp7bn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.1.0-a-71c1b0067a", "pod":"whisker-844d8cf486-rp7bn", "timestamp":"2026-01-23 17:32:42.40260424 +0000 UTC"}, Hostname:"ci-4547.1.0-a-71c1b0067a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:32:42.484911 containerd[2084]: 2026-01-23 17:32:42.402 [INFO][4720] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:32:42.484911 containerd[2084]: 2026-01-23 17:32:42.402 [INFO][4720] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 17:32:42.484911 containerd[2084]: 2026-01-23 17:32:42.402 [INFO][4720] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-71c1b0067a' Jan 23 17:32:42.484911 containerd[2084]: 2026-01-23 17:32:42.408 [INFO][4720] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:42.484911 containerd[2084]: 2026-01-23 17:32:42.412 [INFO][4720] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:42.484911 containerd[2084]: 2026-01-23 17:32:42.415 [INFO][4720] ipam/ipam.go 511: Trying affinity for 192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:42.484911 containerd[2084]: 2026-01-23 17:32:42.417 [INFO][4720] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:42.484911 containerd[2084]: 2026-01-23 17:32:42.420 [INFO][4720] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:42.485064 containerd[2084]: 2026-01-23 17:32:42.420 [INFO][4720] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:42.485064 containerd[2084]: 2026-01-23 17:32:42.421 [INFO][4720] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611 Jan 23 17:32:42.485064 containerd[2084]: 2026-01-23 17:32:42.426 [INFO][4720] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:42.485064 containerd[2084]: 2026-01-23 17:32:42.440 [INFO][4720] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.1/26] block=192.168.117.0/26 handle="k8s-pod-network.aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:42.485064 containerd[2084]: 2026-01-23 17:32:42.440 
[INFO][4720] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.1/26] handle="k8s-pod-network.aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:42.485064 containerd[2084]: 2026-01-23 17:32:42.441 [INFO][4720] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:32:42.485064 containerd[2084]: 2026-01-23 17:32:42.441 [INFO][4720] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.1/26] IPv6=[] ContainerID="aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611" HandleID="k8s-pod-network.aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611" Workload="ci--4547.1.0--a--71c1b0067a-k8s-whisker--844d8cf486--rp7bn-eth0" Jan 23 17:32:42.485161 containerd[2084]: 2026-01-23 17:32:42.443 [INFO][4709] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611" Namespace="calico-system" Pod="whisker-844d8cf486-rp7bn" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-whisker--844d8cf486--rp7bn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--71c1b0067a-k8s-whisker--844d8cf486--rp7bn-eth0", GenerateName:"whisker-844d8cf486-", Namespace:"calico-system", SelfLink:"", UID:"c0c6a4bb-e851-48c7-afc7-8d5b88a4086b", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"844d8cf486", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-71c1b0067a", ContainerID:"", Pod:"whisker-844d8cf486-rp7bn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.117.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1dcc8aa8b42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:32:42.485161 containerd[2084]: 2026-01-23 17:32:42.443 [INFO][4709] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.1/32] ContainerID="aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611" Namespace="calico-system" Pod="whisker-844d8cf486-rp7bn" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-whisker--844d8cf486--rp7bn-eth0" Jan 23 17:32:42.485210 containerd[2084]: 2026-01-23 17:32:42.443 [INFO][4709] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1dcc8aa8b42 ContainerID="aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611" Namespace="calico-system" Pod="whisker-844d8cf486-rp7bn" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-whisker--844d8cf486--rp7bn-eth0" Jan 23 17:32:42.485210 containerd[2084]: 2026-01-23 17:32:42.464 [INFO][4709] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611" Namespace="calico-system" Pod="whisker-844d8cf486-rp7bn" 
WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-whisker--844d8cf486--rp7bn-eth0" Jan 23 17:32:42.485241 containerd[2084]: 2026-01-23 17:32:42.464 [INFO][4709] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611" Namespace="calico-system" Pod="whisker-844d8cf486-rp7bn" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-whisker--844d8cf486--rp7bn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--71c1b0067a-k8s-whisker--844d8cf486--rp7bn-eth0", GenerateName:"whisker-844d8cf486-", Namespace:"calico-system", SelfLink:"", UID:"c0c6a4bb-e851-48c7-afc7-8d5b88a4086b", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"844d8cf486", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-71c1b0067a", ContainerID:"aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611", Pod:"whisker-844d8cf486-rp7bn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.117.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1dcc8aa8b42", MAC:"a6:b8:c6:2c:a1:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:32:42.485274 containerd[2084]: 2026-01-23 17:32:42.482 [INFO][4709] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611" Namespace="calico-system" Pod="whisker-844d8cf486-rp7bn" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-whisker--844d8cf486--rp7bn-eth0" Jan 23 17:32:42.529195 containerd[2084]: time="2026-01-23T17:32:42.529148161Z" level=info msg="connecting to shim aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611" address="unix:///run/containerd/s/44a73532d42d34f50610c42385fbd7f5e19146697a2d1fc98070405ca6efe3f9" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:32:42.551081 systemd[1]: Started cri-containerd-aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611.scope - libcontainer container aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611. 
Jan 23 17:32:42.557000 audit: BPF prog-id=199 op=LOAD Jan 23 17:32:42.558000 audit: BPF prog-id=200 op=LOAD Jan 23 17:32:42.558000 audit[4754]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4743 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:42.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165616331363130616638343264393038336434613065376634353262 Jan 23 17:32:42.558000 audit: BPF prog-id=200 op=UNLOAD Jan 23 17:32:42.558000 audit[4754]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4743 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:42.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165616331363130616638343264393038336434613065376634353262 Jan 23 17:32:42.558000 audit: BPF prog-id=201 op=LOAD Jan 23 17:32:42.558000 audit[4754]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4743 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:42.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165616331363130616638343264393038336434613065376634353262 Jan 23 17:32:42.559000 audit: BPF prog-id=202 op=LOAD Jan 23 17:32:42.559000 audit[4754]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4743 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:42.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165616331363130616638343264393038336434613065376634353262 Jan 23 17:32:42.559000 audit: BPF prog-id=202 op=UNLOAD Jan 23 17:32:42.559000 audit[4754]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4743 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:42.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165616331363130616638343264393038336434613065376634353262 Jan 23 17:32:42.559000 audit: BPF prog-id=201 op=UNLOAD Jan 23 17:32:42.559000 audit[4754]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4743 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:42.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165616331363130616638343264393038336434613065376634353262 Jan 23 17:32:42.559000 audit: BPF prog-id=203 op=LOAD Jan 23 17:32:42.559000 audit[4754]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4743 pid=4754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:42.559000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165616331363130616638343264393038336434613065376634353262 Jan 23 17:32:42.583587 containerd[2084]: time="2026-01-23T17:32:42.583469561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-844d8cf486-rp7bn,Uid:c0c6a4bb-e851-48c7-afc7-8d5b88a4086b,Namespace:calico-system,Attempt:0,} returns sandbox id \"aeac1610af842d9083d4a0e7f452ba1287464f7bc7deb32365401c0e6ac86611\"" Jan 23 17:32:42.586379 containerd[2084]: time="2026-01-23T17:32:42.586000924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:32:42.833924 containerd[2084]: time="2026-01-23T17:32:42.833429375Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:32:42.836397 containerd[2084]: time="2026-01-23T17:32:42.836279663Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:32:42.836397 containerd[2084]: time="2026-01-23T17:32:42.836325441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 17:32:42.839895 kubelet[3685]: E0123 17:32:42.839575 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:32:42.839895 kubelet[3685]: E0123 17:32:42.839750 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:32:42.840868 kubelet[3685]: E0123 17:32:42.840469 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-844d8cf486-rp7bn_calico-system(c0c6a4bb-e851-48c7-afc7-8d5b88a4086b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:32:42.842941 containerd[2084]: time="2026-01-23T17:32:42.842781696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:32:43.221754 containerd[2084]: time="2026-01-23T17:32:43.221554477Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:32:43.225536 containerd[2084]: time="2026-01-23T17:32:43.225425351Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:32:43.225536 containerd[2084]: time="2026-01-23T17:32:43.225474809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 17:32:43.226118 kubelet[3685]: E0123 17:32:43.225823 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:32:43.226118 kubelet[3685]: E0123 17:32:43.225937 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:32:43.226118 kubelet[3685]: E0123 17:32:43.226013 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-844d8cf486-rp7bn_calico-system(c0c6a4bb-e851-48c7-afc7-8d5b88a4086b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:32:43.226118 kubelet[3685]: E0123 17:32:43.226061 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-844d8cf486-rp7bn" podUID="c0c6a4bb-e851-48c7-afc7-8d5b88a4086b" Jan 23 17:32:43.601834 kubelet[3685]: E0123 17:32:43.601364 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-844d8cf486-rp7bn" podUID="c0c6a4bb-e851-48c7-afc7-8d5b88a4086b" Jan 23 17:32:43.627000 audit[4898]: NETFILTER_CFG table=filter:118 family=2 entries=22 op=nft_register_rule pid=4898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:43.627000 audit[4898]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffea793820 a2=0 a3=1 items=0 ppid=3791 pid=4898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:43.627000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:43.631000 audit[4898]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:43.631000 audit[4898]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffea793820 a2=0 a3=1 items=0 ppid=3791 pid=4898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:43.631000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:43.929083 systemd-networkd[1662]: cali1dcc8aa8b42: Gained IPv6LL Jan 23 17:32:45.462930 containerd[2084]: time="2026-01-23T17:32:45.462816668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85fb74bcbb-xldvx,Uid:ce786634-1bb7-4148-9461-d169f302e50f,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:32:45.561961 systemd-networkd[1662]: cali826f6d74620: Link UP Jan 23 17:32:45.562100 systemd-networkd[1662]: cali826f6d74620: Gained carrier Jan 23 17:32:45.578600 containerd[2084]: 2026-01-23 17:32:45.491 [INFO][4941] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 17:32:45.578600 containerd[2084]: 2026-01-23 17:32:45.502 [INFO][4941] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--xldvx-eth0 calico-apiserver-85fb74bcbb- calico-apiserver ce786634-1bb7-4148-9461-d169f302e50f 843 0 2026-01-23 17:32:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85fb74bcbb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.1.0-a-71c1b0067a calico-apiserver-85fb74bcbb-xldvx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali826f6d74620 [] [] }} ContainerID="3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75" Namespace="calico-apiserver" Pod="calico-apiserver-85fb74bcbb-xldvx" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--xldvx-" Jan 23 17:32:45.578600 containerd[2084]: 2026-01-23 17:32:45.502 [INFO][4941] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75" Namespace="calico-apiserver" Pod="calico-apiserver-85fb74bcbb-xldvx" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--xldvx-eth0" Jan 23 17:32:45.578600 containerd[2084]: 2026-01-23 17:32:45.523 [INFO][4954] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75" HandleID="k8s-pod-network.3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75" Workload="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--xldvx-eth0" Jan 23 17:32:45.578899 containerd[2084]: 2026-01-23 17:32:45.523 [INFO][4954] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75" HandleID="k8s-pod-network.3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75" Workload="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--xldvx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.1.0-a-71c1b0067a", "pod":"calico-apiserver-85fb74bcbb-xldvx", "timestamp":"2026-01-23 17:32:45.52336108 +0000 UTC"}, Hostname:"ci-4547.1.0-a-71c1b0067a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:32:45.578899 containerd[2084]: 2026-01-23 17:32:45.523 [INFO][4954] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:32:45.578899 containerd[2084]: 2026-01-23 17:32:45.523 [INFO][4954] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:32:45.578899 containerd[2084]: 2026-01-23 17:32:45.523 [INFO][4954] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-71c1b0067a' Jan 23 17:32:45.578899 containerd[2084]: 2026-01-23 17:32:45.529 [INFO][4954] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:45.578899 containerd[2084]: 2026-01-23 17:32:45.532 [INFO][4954] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:45.578899 containerd[2084]: 2026-01-23 17:32:45.536 [INFO][4954] ipam/ipam.go 511: Trying affinity for 192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:45.578899 containerd[2084]: 2026-01-23 17:32:45.537 [INFO][4954] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:45.578899 containerd[2084]: 2026-01-23 17:32:45.540 [INFO][4954] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:45.579165 containerd[2084]: 2026-01-23 17:32:45.540 [INFO][4954] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:45.579165 containerd[2084]: 2026-01-23 17:32:45.541 [INFO][4954] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75 Jan 23 17:32:45.579165 containerd[2084]: 2026-01-23 17:32:45.546 [INFO][4954] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:45.579165 containerd[2084]: 2026-01-23 17:32:45.556 [INFO][4954] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.2/26] block=192.168.117.0/26 handle="k8s-pod-network.3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:45.579165 containerd[2084]: 2026-01-23 17:32:45.556 [INFO][4954] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.2/26] handle="k8s-pod-network.3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:45.579165 containerd[2084]: 2026-01-23 17:32:45.556 [INFO][4954] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 17:32:45.579165 containerd[2084]: 2026-01-23 17:32:45.557 [INFO][4954] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.2/26] IPv6=[] ContainerID="3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75" HandleID="k8s-pod-network.3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75" Workload="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--xldvx-eth0" Jan 23 17:32:45.579270 containerd[2084]: 2026-01-23 17:32:45.559 [INFO][4941] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75" Namespace="calico-apiserver" Pod="calico-apiserver-85fb74bcbb-xldvx" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--xldvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--xldvx-eth0", GenerateName:"calico-apiserver-85fb74bcbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce786634-1bb7-4148-9461-d169f302e50f", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85fb74bcbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-71c1b0067a", ContainerID:"", Pod:"calico-apiserver-85fb74bcbb-xldvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali826f6d74620", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:32:45.579311 containerd[2084]: 2026-01-23 17:32:45.559 [INFO][4941] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.2/32] ContainerID="3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75" Namespace="calico-apiserver" Pod="calico-apiserver-85fb74bcbb-xldvx" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--xldvx-eth0" Jan 23 17:32:45.579311 containerd[2084]: 2026-01-23 17:32:45.559 [INFO][4941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali826f6d74620 ContainerID="3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75" Namespace="calico-apiserver" Pod="calico-apiserver-85fb74bcbb-xldvx" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--xldvx-eth0" Jan 23 17:32:45.579311 containerd[2084]: 2026-01-23 17:32:45.561 [INFO][4941] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75" Namespace="calico-apiserver" Pod="calico-apiserver-85fb74bcbb-xldvx" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--xldvx-eth0" Jan 23 17:32:45.579354 containerd[2084]: 2026-01-23 17:32:45.561 
[INFO][4941] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75" Namespace="calico-apiserver" Pod="calico-apiserver-85fb74bcbb-xldvx" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--xldvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--xldvx-eth0", GenerateName:"calico-apiserver-85fb74bcbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce786634-1bb7-4148-9461-d169f302e50f", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85fb74bcbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-71c1b0067a", ContainerID:"3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75", Pod:"calico-apiserver-85fb74bcbb-xldvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali826f6d74620", MAC:"ee:c5:bd:54:fe:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:32:45.579387 containerd[2084]: 2026-01-23 17:32:45.576 [INFO][4941] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75" Namespace="calico-apiserver" Pod="calico-apiserver-85fb74bcbb-xldvx" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--xldvx-eth0" Jan 23 17:32:45.612407 containerd[2084]: time="2026-01-23T17:32:45.612333787Z" level=info msg="connecting to shim 3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75" address="unix:///run/containerd/s/78df899c8bc6a94781b522ec0b2acd72af000aac7ddd584c9813f57f8ffa2d39" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:32:45.634065 systemd[1]: Started cri-containerd-3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75.scope - libcontainer container 3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75. 
Jan 23 17:32:45.642000 audit: BPF prog-id=204 op=LOAD Jan 23 17:32:45.642000 audit: BPF prog-id=205 op=LOAD Jan 23 17:32:45.642000 audit[4987]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4976 pid=4987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:45.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362343830616239363131343033306239373836633931623561643632 Jan 23 17:32:45.642000 audit: BPF prog-id=205 op=UNLOAD Jan 23 17:32:45.642000 audit[4987]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4976 pid=4987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:45.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362343830616239363131343033306239373836633931623561643632 Jan 23 17:32:45.642000 audit: BPF prog-id=206 op=LOAD Jan 23 17:32:45.642000 audit[4987]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4976 pid=4987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:45.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362343830616239363131343033306239373836633931623561643632 Jan 23 17:32:45.642000 audit: BPF prog-id=207 op=LOAD Jan 23 17:32:45.642000 audit[4987]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4976 pid=4987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:45.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362343830616239363131343033306239373836633931623561643632 Jan 23 17:32:45.642000 audit: BPF prog-id=207 op=UNLOAD Jan 23 17:32:45.642000 audit[4987]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4976 pid=4987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:45.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362343830616239363131343033306239373836633931623561643632 Jan 23 17:32:45.642000 audit: BPF prog-id=206 op=UNLOAD Jan 23 17:32:45.642000 audit[4987]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4976 pid=4987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:45.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362343830616239363131343033306239373836633931623561643632 Jan 23 17:32:45.642000 audit: BPF prog-id=208 op=LOAD Jan 23 17:32:45.642000 audit[4987]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4976 pid=4987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:45.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362343830616239363131343033306239373836633931623561643632 Jan 23 17:32:45.666358 containerd[2084]: time="2026-01-23T17:32:45.666314357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85fb74bcbb-xldvx,Uid:ce786634-1bb7-4148-9461-d169f302e50f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3b480ab96114030b9786c91b5ad627351cdf38d43c118fcc0846d4b4739d3f75\"" Jan 23 17:32:45.669056 containerd[2084]: time="2026-01-23T17:32:45.668999646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:32:45.920616 containerd[2084]: time="2026-01-23T17:32:45.920564109Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:32:45.923650 containerd[2084]: time="2026-01-23T17:32:45.923607495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:32:45.923735 containerd[2084]: time="2026-01-23T17:32:45.923668946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:32:45.924025 kubelet[3685]: E0123 17:32:45.923931 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:32:45.924025 kubelet[3685]: E0123 17:32:45.923982 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:32:45.924378 kubelet[3685]: E0123 17:32:45.924053 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85fb74bcbb-xldvx_calico-apiserver(ce786634-1bb7-4148-9461-d169f302e50f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:32:45.924378 kubelet[3685]: E0123 17:32:45.924082 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-xldvx" podUID="ce786634-1bb7-4148-9461-d169f302e50f" Jan 23 17:32:46.462599 containerd[2084]: time="2026-01-23T17:32:46.462399892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-kwnqz,Uid:3673ff07-a128-4686-9fb6-6fd2ab66f4db,Namespace:calico-system,Attempt:0,}" Jan 23 17:32:46.556808 systemd-networkd[1662]: caliacd656eb6c4: Link UP Jan 23 17:32:46.557063 systemd-networkd[1662]: caliacd656eb6c4: Gained carrier Jan 23 17:32:46.573961 containerd[2084]: 2026-01-23 17:32:46.490 [INFO][5035] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 17:32:46.573961 containerd[2084]: 2026-01-23 17:32:46.499 [INFO][5035] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--71c1b0067a-k8s-goldmane--7c778bb748--kwnqz-eth0 goldmane-7c778bb748- calico-system 3673ff07-a128-4686-9fb6-6fd2ab66f4db 844 0 2026-01-23 17:32:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547.1.0-a-71c1b0067a goldmane-7c778bb748-kwnqz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliacd656eb6c4 [] [] }} ContainerID="bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d" Namespace="calico-system" Pod="goldmane-7c778bb748-kwnqz" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-goldmane--7c778bb748--kwnqz-" Jan 23 17:32:46.573961 containerd[2084]: 2026-01-23 17:32:46.499 [INFO][5035] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d" Namespace="calico-system" Pod="goldmane-7c778bb748-kwnqz" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-goldmane--7c778bb748--kwnqz-eth0" Jan 23 17:32:46.573961 containerd[2084]: 2026-01-23 17:32:46.518 [INFO][5047] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d" HandleID="k8s-pod-network.bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d" Workload="ci--4547.1.0--a--71c1b0067a-k8s-goldmane--7c778bb748--kwnqz-eth0" Jan 23 17:32:46.574599 containerd[2084]: 2026-01-23 17:32:46.518 [INFO][5047] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d" HandleID="k8s-pod-network.bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d" Workload="ci--4547.1.0--a--71c1b0067a-k8s-goldmane--7c778bb748--kwnqz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b1a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.1.0-a-71c1b0067a", "pod":"goldmane-7c778bb748-kwnqz", "timestamp":"2026-01-23 17:32:46.518212416 +0000 UTC"}, Hostname:"ci-4547.1.0-a-71c1b0067a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:32:46.574599 containerd[2084]: 2026-01-23 17:32:46.518 [INFO][5047] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:32:46.574599 containerd[2084]: 2026-01-23 17:32:46.518 [INFO][5047] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 17:32:46.574599 containerd[2084]: 2026-01-23 17:32:46.518 [INFO][5047] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-71c1b0067a' Jan 23 17:32:46.574599 containerd[2084]: 2026-01-23 17:32:46.523 [INFO][5047] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:46.574599 containerd[2084]: 2026-01-23 17:32:46.527 [INFO][5047] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:46.574599 containerd[2084]: 2026-01-23 17:32:46.531 [INFO][5047] ipam/ipam.go 511: Trying affinity for 192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:46.574599 containerd[2084]: 2026-01-23 17:32:46.533 [INFO][5047] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:46.574599 containerd[2084]: 2026-01-23 17:32:46.534 [INFO][5047] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:46.574738 containerd[2084]: 2026-01-23 17:32:46.534 [INFO][5047] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:46.574738 containerd[2084]: 2026-01-23 17:32:46.536 [INFO][5047] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d Jan 23 17:32:46.574738 containerd[2084]: 2026-01-23 17:32:46.541 [INFO][5047] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:46.574738 containerd[2084]: 2026-01-23 17:32:46.552 [INFO][5047] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.3/26] block=192.168.117.0/26 handle="k8s-pod-network.bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:46.574738 containerd[2084]: 2026-01-23 17:32:46.552 [INFO][5047] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.3/26] handle="k8s-pod-network.bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:46.574738 containerd[2084]: 2026-01-23 17:32:46.552 [INFO][5047] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 17:32:46.574738 containerd[2084]: 2026-01-23 17:32:46.552 [INFO][5047] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.3/26] IPv6=[] ContainerID="bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d" HandleID="k8s-pod-network.bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d" Workload="ci--4547.1.0--a--71c1b0067a-k8s-goldmane--7c778bb748--kwnqz-eth0" Jan 23 17:32:46.574833 containerd[2084]: 2026-01-23 17:32:46.554 [INFO][5035] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d" Namespace="calico-system" Pod="goldmane-7c778bb748-kwnqz" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-goldmane--7c778bb748--kwnqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--71c1b0067a-k8s-goldmane--7c778bb748--kwnqz-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"3673ff07-a128-4686-9fb6-6fd2ab66f4db", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-71c1b0067a", ContainerID:"", Pod:"goldmane-7c778bb748-kwnqz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.117.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliacd656eb6c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:32:46.574958 containerd[2084]: 2026-01-23 17:32:46.554 [INFO][5035] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.3/32] ContainerID="bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d" Namespace="calico-system" Pod="goldmane-7c778bb748-kwnqz" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-goldmane--7c778bb748--kwnqz-eth0" Jan 23 17:32:46.574958 containerd[2084]: 2026-01-23 17:32:46.554 [INFO][5035] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliacd656eb6c4 ContainerID="bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d" Namespace="calico-system" Pod="goldmane-7c778bb748-kwnqz" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-goldmane--7c778bb748--kwnqz-eth0" Jan 23 17:32:46.574958 containerd[2084]: 2026-01-23 17:32:46.556 [INFO][5035] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d" Namespace="calico-system" Pod="goldmane-7c778bb748-kwnqz" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-goldmane--7c778bb748--kwnqz-eth0" Jan 23 17:32:46.575064 containerd[2084]: 2026-01-23 17:32:46.557 [INFO][5035] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d" 
Namespace="calico-system" Pod="goldmane-7c778bb748-kwnqz" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-goldmane--7c778bb748--kwnqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--71c1b0067a-k8s-goldmane--7c778bb748--kwnqz-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"3673ff07-a128-4686-9fb6-6fd2ab66f4db", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-71c1b0067a", ContainerID:"bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d", Pod:"goldmane-7c778bb748-kwnqz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.117.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliacd656eb6c4", MAC:"72:c6:37:56:8e:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:32:46.575114 containerd[2084]: 2026-01-23 17:32:46.570 [INFO][5035] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d" Namespace="calico-system" Pod="goldmane-7c778bb748-kwnqz" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-goldmane--7c778bb748--kwnqz-eth0" Jan 23 17:32:46.613179 kubelet[3685]: E0123 17:32:46.613113 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-xldvx" podUID="ce786634-1bb7-4148-9461-d169f302e50f" Jan 23 17:32:46.620213 containerd[2084]: time="2026-01-23T17:32:46.620165831Z" level=info msg="connecting to shim bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d" address="unix:///run/containerd/s/1ccd2ceb477a939919ba9512c1de6f3ed3ab3b3b6fd007242b6e8044be51771d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:32:46.660233 kernel: kauditd_printk_skb: 55 callbacks suppressed Jan 23 17:32:46.660368 kernel: audit: type=1325 audit(1769189566.645:625): table=filter:120 family=2 entries=22 op=nft_register_rule pid=5091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:46.645000 audit[5091]: NETFILTER_CFG table=filter:120 family=2 entries=22 op=nft_register_rule pid=5091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:46.645000 audit[5091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd27ef2c0 a2=0 a3=1 items=0 ppid=3791 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:46.678365 kernel: audit: type=1300 audit(1769189566.645:625): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd27ef2c0 a2=0 a3=1 items=0 ppid=3791 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:46.645000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:46.686078 kernel: audit: type=1327 audit(1769189566.645:625): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:46.687253 systemd[1]: Started cri-containerd-bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d.scope - libcontainer container bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d. Jan 23 17:32:46.687000 audit[5091]: NETFILTER_CFG table=nat:121 family=2 entries=12 op=nft_register_rule pid=5091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:46.687000 audit[5091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd27ef2c0 a2=0 a3=1 items=0 ppid=3791 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:46.715245 kernel: audit: type=1325 audit(1769189566.687:626): table=nat:121 family=2 entries=12 op=nft_register_rule pid=5091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:46.715376 kernel: audit: type=1300 audit(1769189566.687:626): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd27ef2c0 a2=0 a3=1 items=0 ppid=3791 pid=5091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:46.687000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:46.724053 kernel: audit: type=1327 audit(1769189566.687:626): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:46.725000 audit: BPF prog-id=209 op=LOAD Jan 23 17:32:46.730000 audit: BPF prog-id=210 op=LOAD Jan 23 17:32:46.735293 kernel: audit: type=1334 audit(1769189566.725:627): prog-id=209 op=LOAD Jan 23 17:32:46.735642 kernel: audit: type=1334 audit(1769189566.730:628): prog-id=210 op=LOAD Jan 23 17:32:46.735673 kernel: audit: type=1300 audit(1769189566.730:628): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5067 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:46.730000 audit[5080]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5067 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:46.730000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333933643931613531323232376132363961653462353462373164 Jan 23 17:32:46.767418 kernel: audit: type=1327 audit(1769189566.730:628): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333933643931613531323232376132363961653462353462373164 Jan 23 17:32:46.730000 audit: BPF prog-id=210 op=UNLOAD Jan 23 17:32:46.730000 audit[5080]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5067 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:46.730000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333933643931613531323232376132363961653462353462373164 Jan 23 17:32:46.732000 audit: BPF prog-id=211 op=LOAD Jan 23 17:32:46.732000 audit[5080]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5067 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:46.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333933643931613531323232376132363961653462353462373164 Jan 23 17:32:46.735000 audit: BPF prog-id=212 op=LOAD Jan 23 17:32:46.735000 audit[5080]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5067 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:46.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333933643931613531323232376132363961653462353462373164 Jan 23 17:32:46.750000 audit: BPF prog-id=212 op=UNLOAD Jan 23 17:32:46.750000 audit[5080]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5067 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:46.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333933643931613531323232376132363961653462353462373164 Jan 23 17:32:46.750000 audit: BPF prog-id=211 op=UNLOAD Jan 23 17:32:46.750000 audit[5080]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5067 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:46.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333933643931613531323232376132363961653462353462373164 Jan 23 17:32:46.750000 audit: BPF prog-id=213 op=LOAD Jan 23 17:32:46.750000 audit[5080]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5067 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:46.750000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333933643931613531323232376132363961653462353462373164 Jan 23 17:32:46.789575 containerd[2084]: time="2026-01-23T17:32:46.789529185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-kwnqz,Uid:3673ff07-a128-4686-9fb6-6fd2ab66f4db,Namespace:calico-system,Attempt:0,} returns sandbox id \"bf393d91a512227a269ae4b54b71d2fc5692b5808cfe7c78e1548fe8d685db0d\"" Jan 23 17:32:46.791700 containerd[2084]: time="2026-01-23T17:32:46.791666935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:32:46.873075 systemd-networkd[1662]: cali826f6d74620: Gained IPv6LL Jan 23 17:32:47.022627 containerd[2084]: time="2026-01-23T17:32:47.022487534Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:32:47.025472 containerd[2084]: time="2026-01-23T17:32:47.025421791Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:32:47.025567 containerd[2084]: time="2026-01-23T17:32:47.025523492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 17:32:47.025766 kubelet[3685]: E0123 17:32:47.025724 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:32:47.026061 kubelet[3685]: E0123 17:32:47.025772 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:32:47.026061 kubelet[3685]: E0123 17:32:47.025836 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-kwnqz_calico-system(3673ff07-a128-4686-9fb6-6fd2ab66f4db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 
23 17:32:47.026061 kubelet[3685]: E0123 17:32:47.025875 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kwnqz" podUID="3673ff07-a128-4686-9fb6-6fd2ab66f4db" Jan 23 17:32:47.461742 containerd[2084]: time="2026-01-23T17:32:47.461695432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cnkb7,Uid:917f100a-145b-467d-9973-bba49cd20f34,Namespace:kube-system,Attempt:0,}" Jan 23 17:32:47.465604 containerd[2084]: time="2026-01-23T17:32:47.465566019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55dddbdf7b-97cf9,Uid:e966f1d6-8ee3-4476-b957-9bae66b7553b,Namespace:calico-system,Attempt:0,}" Jan 23 17:32:47.585652 systemd-networkd[1662]: cali132a20431d1: Link UP Jan 23 17:32:47.587728 systemd-networkd[1662]: cali132a20431d1: Gained carrier Jan 23 17:32:47.608384 containerd[2084]: 2026-01-23 17:32:47.494 [INFO][5128] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 17:32:47.608384 containerd[2084]: 2026-01-23 17:32:47.504 [INFO][5128] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--cnkb7-eth0 coredns-66bc5c9577- kube-system 917f100a-145b-467d-9973-bba49cd20f34 842 0 2026-01-23 17:32:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.1.0-a-71c1b0067a coredns-66bc5c9577-cnkb7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali132a20431d1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2" Namespace="kube-system" Pod="coredns-66bc5c9577-cnkb7" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--cnkb7-" Jan 23 17:32:47.608384 containerd[2084]: 2026-01-23 17:32:47.505 [INFO][5128] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2" Namespace="kube-system" Pod="coredns-66bc5c9577-cnkb7" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--cnkb7-eth0" Jan 23 17:32:47.608384 containerd[2084]: 2026-01-23 17:32:47.540 [INFO][5153] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2" HandleID="k8s-pod-network.2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2" Workload="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--cnkb7-eth0" Jan 23 17:32:47.608828 containerd[2084]: 2026-01-23 17:32:47.540 [INFO][5153] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2" HandleID="k8s-pod-network.2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2" Workload="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--cnkb7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3000), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.1.0-a-71c1b0067a", 
"pod":"coredns-66bc5c9577-cnkb7", "timestamp":"2026-01-23 17:32:47.540374021 +0000 UTC"}, Hostname:"ci-4547.1.0-a-71c1b0067a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:32:47.608828 containerd[2084]: 2026-01-23 17:32:47.540 [INFO][5153] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:32:47.608828 containerd[2084]: 2026-01-23 17:32:47.540 [INFO][5153] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 17:32:47.608828 containerd[2084]: 2026-01-23 17:32:47.540 [INFO][5153] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-71c1b0067a' Jan 23 17:32:47.608828 containerd[2084]: 2026-01-23 17:32:47.548 [INFO][5153] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:47.608828 containerd[2084]: 2026-01-23 17:32:47.554 [INFO][5153] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:47.608828 containerd[2084]: 2026-01-23 17:32:47.558 [INFO][5153] ipam/ipam.go 511: Trying affinity for 192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:47.608828 containerd[2084]: 2026-01-23 17:32:47.560 [INFO][5153] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:47.608828 containerd[2084]: 2026-01-23 17:32:47.562 [INFO][5153] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:47.609012 containerd[2084]: 2026-01-23 17:32:47.562 [INFO][5153] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:47.609012 containerd[2084]: 2026-01-23 17:32:47.564 [INFO][5153] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2 Jan 23 17:32:47.609012 containerd[2084]: 2026-01-23 17:32:47.569 [INFO][5153] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:47.609012 containerd[2084]: 2026-01-23 17:32:47.578 [INFO][5153] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.4/26] block=192.168.117.0/26 handle="k8s-pod-network.2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:47.609012 containerd[2084]: 2026-01-23 17:32:47.578 [INFO][5153] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.4/26] handle="k8s-pod-network.2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:47.609012 containerd[2084]: 2026-01-23 17:32:47.578 [INFO][5153] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 17:32:47.609012 containerd[2084]: 2026-01-23 17:32:47.578 [INFO][5153] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.4/26] IPv6=[] ContainerID="2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2" HandleID="k8s-pod-network.2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2" Workload="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--cnkb7-eth0" Jan 23 17:32:47.609109 containerd[2084]: 2026-01-23 17:32:47.579 [INFO][5128] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2" Namespace="kube-system" Pod="coredns-66bc5c9577-cnkb7" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--cnkb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--cnkb7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"917f100a-145b-467d-9973-bba49cd20f34", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-71c1b0067a", ContainerID:"", Pod:"coredns-66bc5c9577-cnkb7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali132a20431d1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:32:47.609109 containerd[2084]: 2026-01-23 17:32:47.579 [INFO][5128] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.4/32] ContainerID="2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2" Namespace="kube-system" Pod="coredns-66bc5c9577-cnkb7" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--cnkb7-eth0" Jan 23 17:32:47.609109 containerd[2084]: 2026-01-23 17:32:47.579 [INFO][5128] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali132a20431d1 ContainerID="2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2" Namespace="kube-system" Pod="coredns-66bc5c9577-cnkb7" 
WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--cnkb7-eth0" Jan 23 17:32:47.609109 containerd[2084]: 2026-01-23 17:32:47.588 [INFO][5128] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2" Namespace="kube-system" Pod="coredns-66bc5c9577-cnkb7" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--cnkb7-eth0" Jan 23 17:32:47.609109 containerd[2084]: 2026-01-23 17:32:47.591 [INFO][5128] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2" Namespace="kube-system" Pod="coredns-66bc5c9577-cnkb7" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--cnkb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--cnkb7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"917f100a-145b-467d-9973-bba49cd20f34", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-71c1b0067a", ContainerID:"2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2", Pod:"coredns-66bc5c9577-cnkb7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali132a20431d1", MAC:"0a:a9:c4:77:02:c9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:32:47.609233 containerd[2084]: 2026-01-23 17:32:47.606 [INFO][5128] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2" Namespace="kube-system" Pod="coredns-66bc5c9577-cnkb7" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--cnkb7-eth0" Jan 23 17:32:47.615367 kubelet[3685]: E0123 17:32:47.614743 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kwnqz" podUID="3673ff07-a128-4686-9fb6-6fd2ab66f4db" Jan 23 17:32:47.616666 kubelet[3685]: E0123 17:32:47.616561 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-xldvx" podUID="ce786634-1bb7-4148-9461-d169f302e50f" Jan 23 17:32:47.652349 containerd[2084]: time="2026-01-23T17:32:47.652293770Z" level=info msg="connecting to shim 2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2" address="unix:///run/containerd/s/648626523f0c77987b8130746a17ba2a550508baf64287e882e0682f9ccd6658" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:32:47.680121 systemd[1]: Started cri-containerd-2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2.scope - libcontainer container 2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2. Jan 23 17:32:47.690000 audit: BPF prog-id=214 op=LOAD Jan 23 17:32:47.690000 audit: BPF prog-id=215 op=LOAD Jan 23 17:32:47.690000 audit[5195]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5183 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663766313932313833386238313738353930326564303961623336 Jan 23 17:32:47.690000 audit: BPF prog-id=215 op=UNLOAD Jan 23 17:32:47.690000 audit[5195]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5183 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663766313932313833386238313738353930326564303961623336 Jan 23 17:32:47.691000 audit: BPF prog-id=216 op=LOAD Jan 23 17:32:47.691000 audit[5195]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5183 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.691000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663766313932313833386238313738353930326564303961623336 Jan 23 17:32:47.691000 audit: BPF prog-id=217 op=LOAD Jan 23 17:32:47.691000 audit[5195]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5183 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663766313932313833386238313738353930326564303961623336 Jan 23 17:32:47.691000 audit: BPF prog-id=217 op=UNLOAD Jan 23 17:32:47.691000 audit[5195]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5183 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663766313932313833386238313738353930326564303961623336 Jan 23 17:32:47.691000 audit: BPF prog-id=216 op=UNLOAD Jan 23 17:32:47.691000 audit[5195]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5183 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663766313932313833386238313738353930326564303961623336 Jan 23 17:32:47.691000 audit: BPF prog-id=218 op=LOAD Jan 23 17:32:47.691000 audit[5195]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5183 pid=5195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262663766313932313833386238313738353930326564303961623336 Jan 23 17:32:47.708308 systemd-networkd[1662]: cali770b386838b: Link UP Jan 23 17:32:47.708428 systemd-networkd[1662]: cali770b386838b: Gained carrier Jan 23 17:32:47.729769 containerd[2084]: 2026-01-23 17:32:47.508 [INFO][5140] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 17:32:47.729769 containerd[2084]: 2026-01-23 17:32:47.522 [INFO][5140] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--71c1b0067a-k8s-calico--kube--controllers--55dddbdf7b--97cf9-eth0 
calico-kube-controllers-55dddbdf7b- calico-system e966f1d6-8ee3-4476-b957-9bae66b7553b 845 0 2026-01-23 17:32:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55dddbdf7b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547.1.0-a-71c1b0067a calico-kube-controllers-55dddbdf7b-97cf9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali770b386838b [] [] }} ContainerID="13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac" Namespace="calico-system" Pod="calico-kube-controllers-55dddbdf7b-97cf9" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--kube--controllers--55dddbdf7b--97cf9-" Jan 23 17:32:47.729769 containerd[2084]: 2026-01-23 17:32:47.522 [INFO][5140] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac" Namespace="calico-system" Pod="calico-kube-controllers-55dddbdf7b-97cf9" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--kube--controllers--55dddbdf7b--97cf9-eth0" Jan 23 17:32:47.729769 containerd[2084]: 2026-01-23 17:32:47.557 [INFO][5160] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac" HandleID="k8s-pod-network.13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac" Workload="ci--4547.1.0--a--71c1b0067a-k8s-calico--kube--controllers--55dddbdf7b--97cf9-eth0" Jan 23 17:32:47.729769 containerd[2084]: 2026-01-23 17:32:47.557 [INFO][5160] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac" HandleID="k8s-pod-network.13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac" Workload="ci--4547.1.0--a--71c1b0067a-k8s-calico--kube--controllers--55dddbdf7b--97cf9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb830), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.1.0-a-71c1b0067a", "pod":"calico-kube-controllers-55dddbdf7b-97cf9", "timestamp":"2026-01-23 17:32:47.557509424 +0000 UTC"}, Hostname:"ci-4547.1.0-a-71c1b0067a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:32:47.729769 containerd[2084]: 2026-01-23 17:32:47.558 [INFO][5160] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:32:47.729769 containerd[2084]: 2026-01-23 17:32:47.578 [INFO][5160] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:32:47.729769 containerd[2084]: 2026-01-23 17:32:47.578 [INFO][5160] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-71c1b0067a' Jan 23 17:32:47.729769 containerd[2084]: 2026-01-23 17:32:47.650 [INFO][5160] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:47.729769 containerd[2084]: 2026-01-23 17:32:47.661 [INFO][5160] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:47.729769 containerd[2084]: 2026-01-23 17:32:47.669 [INFO][5160] ipam/ipam.go 511: Trying affinity for 192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:47.729769 containerd[2084]: 2026-01-23 17:32:47.671 [INFO][5160] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:47.729769 containerd[2084]: 2026-01-23 17:32:47.675 [INFO][5160] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:47.729769 containerd[2084]: 2026-01-23 17:32:47.675 [INFO][5160] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:47.729769 containerd[2084]: 2026-01-23 17:32:47.683 [INFO][5160] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac Jan 23 17:32:47.729769 containerd[2084]: 2026-01-23 17:32:47.689 [INFO][5160] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:47.729769 containerd[2084]: 2026-01-23 17:32:47.703 [INFO][5160] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.5/26] block=192.168.117.0/26 handle="k8s-pod-network.13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:47.729769 containerd[2084]: 2026-01-23 17:32:47.703 [INFO][5160] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.5/26] handle="k8s-pod-network.13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:47.729769 containerd[2084]: 2026-01-23 17:32:47.703 [INFO][5160] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 17:32:47.729769 containerd[2084]: 2026-01-23 17:32:47.703 [INFO][5160] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.5/26] IPv6=[] ContainerID="13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac" HandleID="k8s-pod-network.13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac" Workload="ci--4547.1.0--a--71c1b0067a-k8s-calico--kube--controllers--55dddbdf7b--97cf9-eth0" Jan 23 17:32:47.730310 containerd[2084]: 2026-01-23 17:32:47.706 [INFO][5140] cni-plugin/k8s.go 418: Populated endpoint ContainerID="13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac" Namespace="calico-system" Pod="calico-kube-controllers-55dddbdf7b-97cf9" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--kube--controllers--55dddbdf7b--97cf9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--71c1b0067a-k8s-calico--kube--controllers--55dddbdf7b--97cf9-eth0", GenerateName:"calico-kube-controllers-55dddbdf7b-", Namespace:"calico-system", SelfLink:"", UID:"e966f1d6-8ee3-4476-b957-9bae66b7553b", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55dddbdf7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-71c1b0067a", ContainerID:"", Pod:"calico-kube-controllers-55dddbdf7b-97cf9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.117.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali770b386838b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:32:47.730310 containerd[2084]: 2026-01-23 17:32:47.706 [INFO][5140] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.5/32] ContainerID="13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac" Namespace="calico-system" Pod="calico-kube-controllers-55dddbdf7b-97cf9" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--kube--controllers--55dddbdf7b--97cf9-eth0" Jan 23 17:32:47.730310 containerd[2084]: 2026-01-23 17:32:47.706 [INFO][5140] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali770b386838b ContainerID="13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac" Namespace="calico-system" Pod="calico-kube-controllers-55dddbdf7b-97cf9" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--kube--controllers--55dddbdf7b--97cf9-eth0" Jan 23 17:32:47.730310 containerd[2084]: 2026-01-23 17:32:47.708 [INFO][5140] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac" Namespace="calico-system" Pod="calico-kube-controllers-55dddbdf7b-97cf9" 
WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--kube--controllers--55dddbdf7b--97cf9-eth0" Jan 23 17:32:47.730310 containerd[2084]: 2026-01-23 17:32:47.709 [INFO][5140] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac" Namespace="calico-system" Pod="calico-kube-controllers-55dddbdf7b-97cf9" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--kube--controllers--55dddbdf7b--97cf9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--71c1b0067a-k8s-calico--kube--controllers--55dddbdf7b--97cf9-eth0", GenerateName:"calico-kube-controllers-55dddbdf7b-", Namespace:"calico-system", SelfLink:"", UID:"e966f1d6-8ee3-4476-b957-9bae66b7553b", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55dddbdf7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-71c1b0067a", ContainerID:"13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac", Pod:"calico-kube-controllers-55dddbdf7b-97cf9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.117.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali770b386838b", MAC:"c2:f8:11:ae:6c:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:32:47.730310 containerd[2084]: 2026-01-23 17:32:47.723 [INFO][5140] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac" Namespace="calico-system" Pod="calico-kube-controllers-55dddbdf7b-97cf9" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--kube--controllers--55dddbdf7b--97cf9-eth0" Jan 23 17:32:47.737000 audit[5224]: NETFILTER_CFG table=filter:122 family=2 entries=22 op=nft_register_rule pid=5224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:47.737000 audit[5224]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff9e1c250 a2=0 a3=1 items=0 ppid=3791 pid=5224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.737000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:47.738000 audit[5224]: NETFILTER_CFG table=nat:123 family=2 entries=12 op=nft_register_rule pid=5224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:47.738000 audit[5224]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff9e1c250 a2=0 a3=1 items=0 ppid=3791 pid=5224 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.738000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:47.742246 containerd[2084]: time="2026-01-23T17:32:47.742214734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cnkb7,Uid:917f100a-145b-467d-9973-bba49cd20f34,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2\"" Jan 23 17:32:47.750403 containerd[2084]: time="2026-01-23T17:32:47.750360894Z" level=info msg="CreateContainer within sandbox \"2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 17:32:47.785312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount792581160.mount: Deactivated successfully. Jan 23 17:32:47.789416 containerd[2084]: time="2026-01-23T17:32:47.788960515Z" level=info msg="Container ea7cc92a08705efe909ca2257327ebf535c05c35b71062faff7e5e18d395af9c: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:32:47.789416 containerd[2084]: time="2026-01-23T17:32:47.789026270Z" level=info msg="connecting to shim 13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac" address="unix:///run/containerd/s/c1081b0a3db991e6a28a08ad5e039be19252b9f6dce890a1c0bfc8dd8f6fbe57" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:32:47.802565 containerd[2084]: time="2026-01-23T17:32:47.802411052Z" level=info msg="CreateContainer within sandbox \"2bf7f1921838b81785902ed09ab368bd76bb4c2c53cfdc2243e2c36d050b7bb2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ea7cc92a08705efe909ca2257327ebf535c05c35b71062faff7e5e18d395af9c\"" Jan 23 17:32:47.803811 containerd[2084]: time="2026-01-23T17:32:47.803130980Z" level=info msg="StartContainer for \"ea7cc92a08705efe909ca2257327ebf535c05c35b71062faff7e5e18d395af9c\"" Jan 23 17:32:47.804164 containerd[2084]: time="2026-01-23T17:32:47.804111751Z" level=info msg="connecting to shim ea7cc92a08705efe909ca2257327ebf535c05c35b71062faff7e5e18d395af9c" address="unix:///run/containerd/s/648626523f0c77987b8130746a17ba2a550508baf64287e882e0682f9ccd6658" protocol=ttrpc version=3 Jan 23 17:32:47.806207 systemd[1]: Started cri-containerd-13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac.scope - libcontainer container 13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac. Jan 23 17:32:47.827046 systemd[1]: Started cri-containerd-ea7cc92a08705efe909ca2257327ebf535c05c35b71062faff7e5e18d395af9c.scope - libcontainer container ea7cc92a08705efe909ca2257327ebf535c05c35b71062faff7e5e18d395af9c. 
Jan 23 17:32:47.828000 audit: BPF prog-id=219 op=LOAD Jan 23 17:32:47.829000 audit: BPF prog-id=220 op=LOAD Jan 23 17:32:47.829000 audit[5250]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=5239 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133653261663935303266343632326335333864323564373261306139 Jan 23 17:32:47.829000 audit: BPF prog-id=220 op=UNLOAD Jan 23 17:32:47.829000 audit[5250]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5239 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.829000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133653261663935303266343632326335333864323564373261306139 Jan 23 17:32:47.830000 audit: BPF prog-id=221 op=LOAD Jan 23 17:32:47.830000 audit[5250]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=5239 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133653261663935303266343632326335333864323564373261306139 Jan 23 17:32:47.830000 audit: BPF prog-id=222 op=LOAD Jan 23 17:32:47.830000 audit[5250]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=5239 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133653261663935303266343632326335333864323564373261306139 Jan 23 17:32:47.830000 audit: BPF prog-id=222 op=UNLOAD Jan 23 17:32:47.830000 audit[5250]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5239 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133653261663935303266343632326335333864323564373261306139 Jan 23 17:32:47.830000 audit: BPF prog-id=221 op=UNLOAD Jan 23 17:32:47.830000 audit[5250]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5239 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133653261663935303266343632326335333864323564373261306139 Jan 23 17:32:47.830000 audit: BPF prog-id=223 op=LOAD Jan 23 17:32:47.830000 audit[5250]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=5239 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.830000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133653261663935303266343632326335333864323564373261306139 Jan 23 17:32:47.839000 audit: BPF prog-id=224 op=LOAD Jan 23 17:32:47.841000 audit: BPF prog-id=225 op=LOAD Jan 23 17:32:47.841000 audit[5262]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=5183 pid=5262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.841000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561376363393261303837303565666539303963613232353733323765 Jan 23 17:32:47.841000 audit: BPF prog-id=225 op=UNLOAD Jan 23 17:32:47.841000 audit[5262]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5183 pid=5262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.841000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561376363393261303837303565666539303963613232353733323765 Jan 23 17:32:47.841000 audit: BPF prog-id=226 op=LOAD Jan 23 17:32:47.841000 audit[5262]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=5183 pid=5262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.841000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561376363393261303837303565666539303963613232353733323765 Jan 23 17:32:47.841000 audit: BPF prog-id=227 op=LOAD Jan 23 17:32:47.841000 audit[5262]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 
ppid=5183 pid=5262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.841000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561376363393261303837303565666539303963613232353733323765 Jan 23 17:32:47.841000 audit: BPF prog-id=227 op=UNLOAD Jan 23 17:32:47.841000 audit[5262]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5183 pid=5262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.841000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561376363393261303837303565666539303963613232353733323765 Jan 23 17:32:47.841000 audit: BPF prog-id=226 op=UNLOAD Jan 23 17:32:47.841000 audit[5262]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5183 pid=5262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.841000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561376363393261303837303565666539303963613232353733323765 Jan 23 17:32:47.841000 audit: BPF prog-id=228 op=LOAD Jan 23 17:32:47.841000 audit[5262]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=5183 pid=5262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:47.841000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561376363393261303837303565666539303963613232353733323765 Jan 23 17:32:47.864160 containerd[2084]: time="2026-01-23T17:32:47.864012264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55dddbdf7b-97cf9,Uid:e966f1d6-8ee3-4476-b957-9bae66b7553b,Namespace:calico-system,Attempt:0,} returns sandbox id \"13e2af9502f4622c538d25d72a0a911f29357236d09bea827fad7448bea3d6ac\"" Jan 23 17:32:47.866752 containerd[2084]: time="2026-01-23T17:32:47.866472660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 17:32:47.872373 containerd[2084]: time="2026-01-23T17:32:47.872334039Z" level=info msg="StartContainer for \"ea7cc92a08705efe909ca2257327ebf535c05c35b71062faff7e5e18d395af9c\" returns successfully" Jan 23 17:32:48.132591 containerd[2084]: time="2026-01-23T17:32:48.132515220Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:32:48.135696 containerd[2084]: time="2026-01-23T17:32:48.135633646Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 17:32:48.135907 containerd[2084]: time="2026-01-23T17:32:48.135669967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 17:32:48.136178 kubelet[3685]: E0123 17:32:48.136141 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:32:48.136575 kubelet[3685]: E0123 17:32:48.136187 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:32:48.136575 kubelet[3685]: E0123 17:32:48.136246 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-55dddbdf7b-97cf9_calico-system(e966f1d6-8ee3-4476-b957-9bae66b7553b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 17:32:48.136575 kubelet[3685]: E0123 17:32:48.136272 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55dddbdf7b-97cf9" podUID="e966f1d6-8ee3-4476-b957-9bae66b7553b" Jan 23 17:32:48.462205 containerd[2084]: time="2026-01-23T17:32:48.461907477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85fb74bcbb-d6mpj,Uid:29070044-7a78-4c22-ba4e-b03de4973ab6,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:32:48.466453 containerd[2084]: time="2026-01-23T17:32:48.466389891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sf7ng,Uid:7fedd3f3-53a6-42e6-a84b-32923d7910c8,Namespace:kube-system,Attempt:0,}" Jan 23 17:32:48.473160 systemd-networkd[1662]: caliacd656eb6c4: Gained IPv6LL Jan 23 17:32:48.582956 systemd-networkd[1662]: calieede4f16df1: Link UP Jan 23 17:32:48.584285 systemd-networkd[1662]: calieede4f16df1: Gained carrier Jan 23 17:32:48.600725 containerd[2084]: 2026-01-23 17:32:48.494 [INFO][5330] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 17:32:48.600725 containerd[2084]: 2026-01-23 17:32:48.508 [INFO][5330] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--d6mpj-eth0 calico-apiserver-85fb74bcbb- calico-apiserver 29070044-7a78-4c22-ba4e-b03de4973ab6 841 0 2026-01-23 17:32:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85fb74bcbb 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.1.0-a-71c1b0067a calico-apiserver-85fb74bcbb-d6mpj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieede4f16df1 [] [] }} ContainerID="65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670" Namespace="calico-apiserver" Pod="calico-apiserver-85fb74bcbb-d6mpj" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--d6mpj-" Jan 23 17:32:48.600725 containerd[2084]: 2026-01-23 17:32:48.508 [INFO][5330] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670" Namespace="calico-apiserver" Pod="calico-apiserver-85fb74bcbb-d6mpj" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--d6mpj-eth0" Jan 23 17:32:48.600725 containerd[2084]: 2026-01-23 17:32:48.540 [INFO][5354] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670" HandleID="k8s-pod-network.65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670" Workload="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--d6mpj-eth0" Jan 23 17:32:48.600725 containerd[2084]: 2026-01-23 17:32:48.540 [INFO][5354] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670" HandleID="k8s-pod-network.65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670" Workload="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--d6mpj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.1.0-a-71c1b0067a", "pod":"calico-apiserver-85fb74bcbb-d6mpj", "timestamp":"2026-01-23 17:32:48.540195384 +0000 UTC"}, Hostname:"ci-4547.1.0-a-71c1b0067a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:32:48.600725 containerd[2084]: 2026-01-23 17:32:48.540 [INFO][5354] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:32:48.600725 containerd[2084]: 2026-01-23 17:32:48.540 [INFO][5354] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:32:48.600725 containerd[2084]: 2026-01-23 17:32:48.540 [INFO][5354] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-71c1b0067a' Jan 23 17:32:48.600725 containerd[2084]: 2026-01-23 17:32:48.549 [INFO][5354] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:48.600725 containerd[2084]: 2026-01-23 17:32:48.553 [INFO][5354] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:48.600725 containerd[2084]: 2026-01-23 17:32:48.557 [INFO][5354] ipam/ipam.go 511: Trying affinity for 192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:48.600725 containerd[2084]: 2026-01-23 17:32:48.558 [INFO][5354] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:48.600725 containerd[2084]: 2026-01-23 17:32:48.560 [INFO][5354] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:48.600725 containerd[2084]: 2026-01-23 17:32:48.560 [INFO][5354] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:48.600725 containerd[2084]: 2026-01-23 17:32:48.562 [INFO][5354] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670 Jan 23 17:32:48.600725 containerd[2084]: 2026-01-23 17:32:48.566 [INFO][5354] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:48.600725 containerd[2084]: 2026-01-23 17:32:48.574 [INFO][5354] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.6/26] block=192.168.117.0/26 handle="k8s-pod-network.65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:48.600725 containerd[2084]: 2026-01-23 17:32:48.575 [INFO][5354] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.6/26] handle="k8s-pod-network.65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:48.600725 containerd[2084]: 2026-01-23 17:32:48.575 [INFO][5354] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 17:32:48.600725 containerd[2084]: 2026-01-23 17:32:48.575 [INFO][5354] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.6/26] IPv6=[] ContainerID="65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670" HandleID="k8s-pod-network.65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670" Workload="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--d6mpj-eth0" Jan 23 17:32:48.601267 containerd[2084]: 2026-01-23 17:32:48.577 [INFO][5330] cni-plugin/k8s.go 418: Populated endpoint ContainerID="65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670" Namespace="calico-apiserver" Pod="calico-apiserver-85fb74bcbb-d6mpj" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--d6mpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--d6mpj-eth0", GenerateName:"calico-apiserver-85fb74bcbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"29070044-7a78-4c22-ba4e-b03de4973ab6", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85fb74bcbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-71c1b0067a", ContainerID:"", Pod:"calico-apiserver-85fb74bcbb-d6mpj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieede4f16df1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:32:48.601267 containerd[2084]: 2026-01-23 17:32:48.577 [INFO][5330] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.6/32] ContainerID="65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670" Namespace="calico-apiserver" Pod="calico-apiserver-85fb74bcbb-d6mpj" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--d6mpj-eth0" Jan 23 17:32:48.601267 containerd[2084]: 2026-01-23 17:32:48.577 [INFO][5330] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieede4f16df1 ContainerID="65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670" Namespace="calico-apiserver" Pod="calico-apiserver-85fb74bcbb-d6mpj" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--d6mpj-eth0" Jan 23 17:32:48.601267 containerd[2084]: 2026-01-23 17:32:48.584 [INFO][5330] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670" Namespace="calico-apiserver" Pod="calico-apiserver-85fb74bcbb-d6mpj" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--d6mpj-eth0" Jan 23 17:32:48.601267 containerd[2084]: 2026-01-23 17:32:48.585 
[INFO][5330] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670" Namespace="calico-apiserver" Pod="calico-apiserver-85fb74bcbb-d6mpj" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--d6mpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--d6mpj-eth0", GenerateName:"calico-apiserver-85fb74bcbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"29070044-7a78-4c22-ba4e-b03de4973ab6", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85fb74bcbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-71c1b0067a", ContainerID:"65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670", Pod:"calico-apiserver-85fb74bcbb-d6mpj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieede4f16df1", MAC:"52:b7:db:76:5d:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:32:48.601267 containerd[2084]: 2026-01-23 17:32:48.598 [INFO][5330] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670" Namespace="calico-apiserver" Pod="calico-apiserver-85fb74bcbb-d6mpj" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-calico--apiserver--85fb74bcbb--d6mpj-eth0" Jan 23 17:32:48.619830 kubelet[3685]: E0123 17:32:48.619789 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55dddbdf7b-97cf9" podUID="e966f1d6-8ee3-4476-b957-9bae66b7553b" Jan 23 17:32:48.623184 kubelet[3685]: E0123 17:32:48.623156 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kwnqz" podUID="3673ff07-a128-4686-9fb6-6fd2ab66f4db" Jan 23 17:32:48.647161 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1054334482.mount: Deactivated successfully. Jan 23 17:32:48.662778 containerd[2084]: time="2026-01-23T17:32:48.662725162Z" level=info msg="connecting to shim 65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670" address="unix:///run/containerd/s/d05abd91ae8a3500edf0792b3c69dde20d604405927fedca065f21bde87c2985" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:32:48.700269 kubelet[3685]: I0123 17:32:48.700212 3685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cnkb7" podStartSLOduration=38.699831685 podStartE2EDuration="38.699831685s" podCreationTimestamp="2026-01-23 17:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:32:48.699047387 +0000 UTC m=+44.328593682" watchObservedRunningTime="2026-01-23 17:32:48.699831685 +0000 UTC m=+44.329377972" Jan 23 17:32:48.701520 systemd[1]: Started cri-containerd-65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670.scope - libcontainer container 65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670. Jan 23 17:32:48.717000 audit: BPF prog-id=229 op=LOAD Jan 23 17:32:48.718000 audit: BPF prog-id=230 op=LOAD Jan 23 17:32:48.718000 audit[5396]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5385 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.718000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635633832663735353539333861346631346331306432653165316130 Jan 23 17:32:48.718000 audit: BPF prog-id=230 op=UNLOAD Jan 23 17:32:48.718000 audit[5396]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5385 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.718000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635633832663735353539333861346631346331306432653165316130 Jan 23 17:32:48.718000 audit: BPF prog-id=231 op=LOAD Jan 23 17:32:48.718000 audit[5396]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5385 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.718000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635633832663735353539333861346631346331306432653165316130 Jan 23 17:32:48.719000 audit: BPF prog-id=232 op=LOAD Jan 23 17:32:48.719000 audit[5396]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5385 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.719000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635633832663735353539333861346631346331306432653165316130 Jan 23 17:32:48.719000 audit: BPF prog-id=232 op=UNLOAD Jan 23 17:32:48.719000 audit[5396]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5385 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.719000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635633832663735353539333861346631346331306432653165316130 Jan 23 17:32:48.719000 audit: BPF prog-id=231 op=UNLOAD Jan 23 17:32:48.719000 audit[5396]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5385 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.719000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635633832663735353539333861346631346331306432653165316130 Jan 23 17:32:48.719000 audit: BPF prog-id=233 op=LOAD Jan 23 17:32:48.719000 audit[5396]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5385 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.719000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635633832663735353539333861346631346331306432653165316130 Jan 23 17:32:48.731565 systemd-networkd[1662]: calic9f0d45e202: Link UP Jan 23 17:32:48.733529 systemd-networkd[1662]: calic9f0d45e202: Gained carrier Jan 23 17:32:48.759261 containerd[2084]: 2026-01-23 17:32:48.511 [INFO][5344] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 17:32:48.759261 containerd[2084]: 2026-01-23 17:32:48.523 [INFO][5344] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--sf7ng-eth0 coredns-66bc5c9577- kube-system 7fedd3f3-53a6-42e6-a84b-32923d7910c8 838 0 2026-01-23 17:32:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.1.0-a-71c1b0067a coredns-66bc5c9577-sf7ng eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic9f0d45e202 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} 
ContainerID="11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3" Namespace="kube-system" Pod="coredns-66bc5c9577-sf7ng" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--sf7ng-" Jan 23 17:32:48.759261 containerd[2084]: 2026-01-23 17:32:48.523 [INFO][5344] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3" Namespace="kube-system" Pod="coredns-66bc5c9577-sf7ng" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--sf7ng-eth0" Jan 23 17:32:48.759261 containerd[2084]: 2026-01-23 17:32:48.550 [INFO][5360] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3" HandleID="k8s-pod-network.11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3" Workload="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--sf7ng-eth0" Jan 23 17:32:48.759261 containerd[2084]: 2026-01-23 17:32:48.550 [INFO][5360] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3" HandleID="k8s-pod-network.11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3" Workload="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--sf7ng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.1.0-a-71c1b0067a", "pod":"coredns-66bc5c9577-sf7ng", "timestamp":"2026-01-23 17:32:48.55072676 +0000 UTC"}, Hostname:"ci-4547.1.0-a-71c1b0067a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:32:48.759261 containerd[2084]: 2026-01-23 17:32:48.550 [INFO][5360] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:32:48.759261 containerd[2084]: 2026-01-23 17:32:48.575 [INFO][5360] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:32:48.759261 containerd[2084]: 2026-01-23 17:32:48.575 [INFO][5360] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-71c1b0067a' Jan 23 17:32:48.759261 containerd[2084]: 2026-01-23 17:32:48.656 [INFO][5360] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:48.759261 containerd[2084]: 2026-01-23 17:32:48.679 [INFO][5360] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:48.759261 containerd[2084]: 2026-01-23 17:32:48.687 [INFO][5360] ipam/ipam.go 511: Trying affinity for 192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:48.759261 containerd[2084]: 2026-01-23 17:32:48.691 [INFO][5360] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:48.759261 containerd[2084]: 2026-01-23 17:32:48.698 [INFO][5360] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:48.759261 containerd[2084]: 2026-01-23 17:32:48.699 [INFO][5360] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:48.759261 containerd[2084]: 2026-01-23 17:32:48.704 [INFO][5360] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3 Jan 23 17:32:48.759261 containerd[2084]: 2026-01-23 17:32:48.713 [INFO][5360] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:48.759261 containerd[2084]: 2026-01-23 17:32:48.722 [INFO][5360] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.7/26] block=192.168.117.0/26 handle="k8s-pod-network.11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:48.759261 containerd[2084]: 2026-01-23 17:32:48.723 [INFO][5360] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.7/26] handle="k8s-pod-network.11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:48.759261 containerd[2084]: 2026-01-23 17:32:48.723 [INFO][5360] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 17:32:48.759261 containerd[2084]: 2026-01-23 17:32:48.723 [INFO][5360] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.7/26] IPv6=[] ContainerID="11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3" HandleID="k8s-pod-network.11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3" Workload="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--sf7ng-eth0" Jan 23 17:32:48.759726 containerd[2084]: 2026-01-23 17:32:48.726 [INFO][5344] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3" Namespace="kube-system" Pod="coredns-66bc5c9577-sf7ng" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--sf7ng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--sf7ng-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7fedd3f3-53a6-42e6-a84b-32923d7910c8", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-71c1b0067a", ContainerID:"", Pod:"coredns-66bc5c9577-sf7ng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9f0d45e202", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:32:48.759726 containerd[2084]: 2026-01-23 17:32:48.726 [INFO][5344] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.7/32] ContainerID="11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3" Namespace="kube-system" Pod="coredns-66bc5c9577-sf7ng" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--sf7ng-eth0" Jan 23 17:32:48.759726 containerd[2084]: 2026-01-23 17:32:48.726 [INFO][5344] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9f0d45e202 ContainerID="11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3" Namespace="kube-system" Pod="coredns-66bc5c9577-sf7ng" 
WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--sf7ng-eth0" Jan 23 17:32:48.759726 containerd[2084]: 2026-01-23 17:32:48.733 [INFO][5344] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3" Namespace="kube-system" Pod="coredns-66bc5c9577-sf7ng" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--sf7ng-eth0" Jan 23 17:32:48.759726 containerd[2084]: 2026-01-23 17:32:48.734 [INFO][5344] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3" Namespace="kube-system" Pod="coredns-66bc5c9577-sf7ng" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--sf7ng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--sf7ng-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7fedd3f3-53a6-42e6-a84b-32923d7910c8", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-71c1b0067a", ContainerID:"11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3", Pod:"coredns-66bc5c9577-sf7ng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic9f0d45e202", MAC:"0e:ed:28:d2:e0:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:32:48.760509 containerd[2084]: 2026-01-23 17:32:48.754 [INFO][5344] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3" Namespace="kube-system" Pod="coredns-66bc5c9577-sf7ng" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-coredns--66bc5c9577--sf7ng-eth0" Jan 23 17:32:48.761000 audit[5424]: NETFILTER_CFG table=filter:124 family=2 entries=22 op=nft_register_rule pid=5424 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:48.761000 
audit[5424]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe3699bd0 a2=0 a3=1 items=0 ppid=3791 pid=5424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.761000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:48.763675 containerd[2084]: time="2026-01-23T17:32:48.763632522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85fb74bcbb-d6mpj,Uid:29070044-7a78-4c22-ba4e-b03de4973ab6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"65c82f7555938a4f14c10d2e1e1a0e058205cdb1c4d7c67a109c59788b796670\"" Jan 23 17:32:48.767215 containerd[2084]: time="2026-01-23T17:32:48.767016543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:32:48.765000 audit[5424]: NETFILTER_CFG table=nat:125 family=2 entries=12 op=nft_register_rule pid=5424 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:48.765000 audit[5424]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe3699bd0 a2=0 a3=1 items=0 ppid=3791 pid=5424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.765000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:48.812607 containerd[2084]: time="2026-01-23T17:32:48.812281259Z" level=info msg="connecting to shim 11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3" address="unix:///run/containerd/s/769b55eab0d0f6a79e949c1f9033b5f1ddaf3c4c2e9c7ec250acb5cef25b2c16" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:32:48.837106 systemd[1]: Started cri-containerd-11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3.scope - libcontainer container 11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3. 
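The audit PROCTITLE fields above carry the audited command line as hex-encoded argv with NUL separators; decoding one shows exactly which iptables-restore invocation produced the NETFILTER_CFG records. A small decoding sketch (the hex value is copied from the "audit: PROCTITLE" records above; the helper name is arbitrary):

```python
def decode_proctitle(hex_value: str) -> str:
    """Audit PROCTITLE values are hex-encoded argv with NUL separators."""
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode()

# Value copied from the "audit: PROCTITLE" records above.
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))
# -> iptables-restore -w 5 --noflush --counters
```

The same decoding applies to the runc and bpftool PROCTITLE records later in this log.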
Jan 23 17:32:48.846000 audit: BPF prog-id=234 op=LOAD Jan 23 17:32:48.846000 audit: BPF prog-id=235 op=LOAD Jan 23 17:32:48.846000 audit[5453]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5439 pid=5453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131643164663735646339656538396266343836373932663832663164 Jan 23 17:32:48.846000 audit: BPF prog-id=235 op=UNLOAD Jan 23 17:32:48.846000 audit[5453]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5439 pid=5453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131643164663735646339656538396266343836373932663832663164 Jan 23 17:32:48.846000 audit: BPF prog-id=236 op=LOAD Jan 23 17:32:48.846000 audit[5453]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5439 pid=5453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131643164663735646339656538396266343836373932663832663164 Jan 23 17:32:48.846000 audit: BPF prog-id=237 op=LOAD Jan 23 17:32:48.846000 audit[5453]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5439 pid=5453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131643164663735646339656538396266343836373932663832663164 Jan 23 17:32:48.846000 audit: BPF prog-id=237 op=UNLOAD Jan 23 17:32:48.846000 audit[5453]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5439 pid=5453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131643164663735646339656538396266343836373932663832663164 Jan 23 17:32:48.846000 audit: BPF prog-id=236 op=UNLOAD Jan 23 17:32:48.846000 audit[5453]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5439 pid=5453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131643164663735646339656538396266343836373932663832663164 Jan 23 17:32:48.846000 audit: BPF prog-id=238 op=LOAD Jan 23 17:32:48.846000 audit[5453]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5439 pid=5453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131643164663735646339656538396266343836373932663832663164 Jan 23 17:32:48.876701 containerd[2084]: time="2026-01-23T17:32:48.876658977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sf7ng,Uid:7fedd3f3-53a6-42e6-a84b-32923d7910c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3\"" Jan 23 17:32:48.885563 containerd[2084]: time="2026-01-23T17:32:48.885481070Z" level=info msg="CreateContainer within sandbox \"11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 17:32:48.901982 containerd[2084]: time="2026-01-23T17:32:48.901933235Z" level=info msg="Container 24c6775e2781de1068052d3626fc601b8edd9c22e35597b03884a4c5fced0a55: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:32:48.915110 containerd[2084]: time="2026-01-23T17:32:48.915070102Z" level=info msg="CreateContainer within sandbox \"11d1df75dc9ee89bf486792f82f1d78f22b51302d0c2afca69a44875211dc7c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"24c6775e2781de1068052d3626fc601b8edd9c22e35597b03884a4c5fced0a55\"" Jan 23 17:32:48.915839 containerd[2084]: time="2026-01-23T17:32:48.915810255Z" level=info msg="StartContainer for \"24c6775e2781de1068052d3626fc601b8edd9c22e35597b03884a4c5fced0a55\"" Jan 23 17:32:48.918029 containerd[2084]: time="2026-01-23T17:32:48.917969350Z" level=info msg="connecting to shim 24c6775e2781de1068052d3626fc601b8edd9c22e35597b03884a4c5fced0a55" address="unix:///run/containerd/s/769b55eab0d0f6a79e949c1f9033b5f1ddaf3c4c2e9c7ec250acb5cef25b2c16" protocol=ttrpc version=3 Jan 23 17:32:48.934044 systemd[1]: Started cri-containerd-24c6775e2781de1068052d3626fc601b8edd9c22e35597b03884a4c5fced0a55.scope - libcontainer container 24c6775e2781de1068052d3626fc601b8edd9c22e35597b03884a4c5fced0a55. 
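Each container start above is bracketed by audit records of runc loading, and mostly immediately unloading, short-lived BPF programs (the "audit: BPF prog-id=... op=LOAD/UNLOAD" pairs for pid 5453). A rough sketch for pairing those events when skimming such a log; the regex is keyed only to the record text shown here:

```python
import re

# Matches the "audit: BPF prog-id=N op=LOAD|UNLOAD" records seen above.
BPF_EVENT = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def outstanding_progs(log_text: str) -> set[str]:
    """Return prog-ids loaded but not yet unloaded within the excerpt."""
    loaded: set[str] = set()
    for prog_id, op in BPF_EVENT.findall(log_text):
        if op == "LOAD":
            loaded.add(prog_id)
        else:
            loaded.discard(prog_id)
    return loaded
```

Against the pid 5453 records above, that leaves only prog-ids 234 and 238 loaded within this excerpt.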
Jan 23 17:32:48.943000 audit: BPF prog-id=239 op=LOAD Jan 23 17:32:48.943000 audit: BPF prog-id=240 op=LOAD Jan 23 17:32:48.943000 audit[5478]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5439 pid=5478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234633637373565323738316465313036383035326433363236666336 Jan 23 17:32:48.943000 audit: BPF prog-id=240 op=UNLOAD Jan 23 17:32:48.943000 audit[5478]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5439 pid=5478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234633637373565323738316465313036383035326433363236666336 Jan 23 17:32:48.943000 audit: BPF prog-id=241 op=LOAD Jan 23 17:32:48.943000 audit[5478]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5439 pid=5478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234633637373565323738316465313036383035326433363236666336 Jan 23 17:32:48.943000 audit: BPF prog-id=242 op=LOAD Jan 23 17:32:48.943000 audit[5478]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5439 pid=5478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234633637373565323738316465313036383035326433363236666336 Jan 23 17:32:48.943000 audit: BPF prog-id=242 op=UNLOAD Jan 23 17:32:48.943000 audit[5478]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5439 pid=5478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234633637373565323738316465313036383035326433363236666336 Jan 23 17:32:48.943000 audit: BPF prog-id=241 op=UNLOAD Jan 23 17:32:48.943000 audit[5478]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5439 pid=5478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234633637373565323738316465313036383035326433363236666336 Jan 23 17:32:48.943000 audit: BPF prog-id=243 op=LOAD Jan 23 17:32:48.943000 audit[5478]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5439 pid=5478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:48.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234633637373565323738316465313036383035326433363236666336 Jan 23 17:32:48.966281 containerd[2084]: time="2026-01-23T17:32:48.966233510Z" level=info msg="StartContainer for \"24c6775e2781de1068052d3626fc601b8edd9c22e35597b03884a4c5fced0a55\" returns successfully" Jan 23 17:32:48.985069 systemd-networkd[1662]: cali770b386838b: Gained IPv6LL Jan 23 17:32:49.007937 containerd[2084]: time="2026-01-23T17:32:49.007884737Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:32:49.011106 containerd[2084]: time="2026-01-23T17:32:49.011001523Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:32:49.011106 containerd[2084]: time="2026-01-23T17:32:49.011052669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:32:49.011449 kubelet[3685]: E0123 17:32:49.011391 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:32:49.011509 kubelet[3685]: E0123 17:32:49.011451 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:32:49.011545 kubelet[3685]: E0123 17:32:49.011524 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85fb74bcbb-d6mpj_calico-apiserver(29070044-7a78-4c22-ba4e-b03de4973ab6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:32:49.011569 kubelet[3685]: E0123 17:32:49.011554 3685 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-d6mpj" podUID="29070044-7a78-4c22-ba4e-b03de4973ab6" Jan 23 17:32:49.497097 systemd-networkd[1662]: cali132a20431d1: Gained IPv6LL Jan 23 17:32:49.629317 kubelet[3685]: E0123 17:32:49.629242 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-d6mpj" podUID="29070044-7a78-4c22-ba4e-b03de4973ab6" Jan 23 17:32:49.631315 kubelet[3685]: E0123 17:32:49.630593 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55dddbdf7b-97cf9" podUID="e966f1d6-8ee3-4476-b957-9bae66b7553b" Jan 23 17:32:49.643883 kubelet[3685]: I0123 17:32:49.642055 3685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sf7ng" podStartSLOduration=39.642041493 podStartE2EDuration="39.642041493s" podCreationTimestamp="2026-01-23 17:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:32:49.641931424 +0000 UTC m=+45.271477735" watchObservedRunningTime="2026-01-23 17:32:49.642041493 +0000 UTC m=+45.271587780" Jan 23 17:32:49.799000 audit[5528]: NETFILTER_CFG table=filter:126 family=2 entries=19 op=nft_register_rule pid=5528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:49.799000 audit[5528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcbbc7f80 a2=0 a3=1 items=0 ppid=3791 pid=5528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:49.799000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:49.817051 systemd-networkd[1662]: calic9f0d45e202: Gained IPv6LL Jan 23 17:32:49.819000 audit[5528]: NETFILTER_CFG table=nat:127 family=2 entries=45 op=nft_register_chain pid=5528 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:49.819000 audit[5528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19092 a0=3 a1=ffffcbbc7f80 a2=0 a3=1 items=0 ppid=3791 pid=5528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:49.819000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:49.933958 kubelet[3685]: I0123 17:32:49.933909 3685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 17:32:50.257000 audit: BPF prog-id=244 op=LOAD Jan 23 17:32:50.257000 audit[5549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0d373c8 a2=98 a3=fffff0d373b8 items=0 ppid=5532 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.257000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 17:32:50.258000 audit: BPF prog-id=244 op=UNLOAD Jan 23 17:32:50.258000 audit[5549]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff0d37398 a3=0 items=0 ppid=5532 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.258000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 17:32:50.258000 audit: BPF prog-id=245 op=LOAD Jan 23 17:32:50.258000 audit[5549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0d37278 a2=74 a3=95 items=0 ppid=5532 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.258000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 17:32:50.258000 audit: BPF prog-id=245 op=UNLOAD Jan 23 17:32:50.258000 audit[5549]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5532 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.258000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 17:32:50.258000 audit: BPF prog-id=246 op=LOAD Jan 23 17:32:50.258000 audit[5549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0d372a8 a2=40 a3=fffff0d372d8 items=0 ppid=5532 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.258000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 17:32:50.258000 audit: BPF prog-id=246 op=UNLOAD Jan 23 17:32:50.258000 audit[5549]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff0d372d8 items=0 ppid=5532 pid=5549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.258000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 23 17:32:50.260000 audit: BPF prog-id=247 op=LOAD Jan 23 17:32:50.260000 audit[5550]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff3a52ab8 a2=98 a3=fffff3a52aa8 items=0 ppid=5532 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.260000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:32:50.260000 audit: BPF prog-id=247 op=UNLOAD Jan 23 17:32:50.260000 audit[5550]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff3a52a88 a3=0 items=0 ppid=5532 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.260000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:32:50.260000 audit: BPF prog-id=248 op=LOAD Jan 23 17:32:50.260000 audit[5550]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff3a52748 a2=74 a3=95 items=0 ppid=5532 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.260000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:32:50.260000 audit: BPF prog-id=248 op=UNLOAD Jan 23 17:32:50.260000 audit[5550]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=5532 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.260000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:32:50.261000 audit: BPF prog-id=249 op=LOAD Jan 23 17:32:50.261000 audit[5550]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff3a527a8 a2=94 a3=2 items=0 ppid=5532 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.261000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:32:50.261000 audit: BPF prog-id=249 op=UNLOAD Jan 23 17:32:50.261000 audit[5550]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 
ppid=5532 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.261000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:32:50.344000 audit: BPF prog-id=250 op=LOAD Jan 23 17:32:50.344000 audit[5550]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff3a52768 a2=40 a3=fffff3a52798 items=0 ppid=5532 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:32:50.344000 audit: BPF prog-id=250 op=UNLOAD Jan 23 17:32:50.344000 audit[5550]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffff3a52798 items=0 ppid=5532 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.344000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:32:50.356000 audit: BPF prog-id=251 op=LOAD Jan 23 17:32:50.356000 audit[5550]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff3a52778 a2=94 a3=4 items=0 ppid=5532 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.356000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:32:50.357000 audit: BPF prog-id=251 op=UNLOAD Jan 23 17:32:50.357000 audit[5550]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=5532 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.357000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:32:50.357000 audit: BPF prog-id=252 op=LOAD Jan 23 17:32:50.357000 audit[5550]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff3a525b8 a2=94 a3=5 items=0 ppid=5532 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.357000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:32:50.358000 audit: BPF prog-id=252 op=UNLOAD Jan 23 17:32:50.358000 audit[5550]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=5532 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.358000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:32:50.358000 audit: BPF prog-id=253 op=LOAD Jan 23 17:32:50.358000 audit[5550]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff3a527e8 a2=94 a3=6 items=0 ppid=5532 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.358000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:32:50.358000 audit: BPF prog-id=253 op=UNLOAD Jan 23 17:32:50.358000 audit[5550]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=5532 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.358000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:32:50.358000 audit: BPF prog-id=254 op=LOAD Jan 23 17:32:50.358000 audit[5550]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff3a51fb8 a2=94 a3=83 items=0 ppid=5532 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.358000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:32:50.359000 audit: BPF prog-id=255 op=LOAD Jan 23 17:32:50.359000 audit[5550]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffff3a51d78 a2=94 a3=2 items=0 ppid=5532 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.359000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:32:50.359000 audit: BPF prog-id=255 op=UNLOAD Jan 23 17:32:50.359000 audit[5550]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=5532 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.359000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:32:50.360000 audit: BPF prog-id=254 op=UNLOAD Jan 23 17:32:50.360000 audit[5550]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=ea4d620 a3=ea40b00 items=0 ppid=5532 pid=5550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.360000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 23 17:32:50.371000 audit: BPF prog-id=256 op=LOAD Jan 23 17:32:50.371000 audit[5559]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcbac4778 a2=98 a3=ffffcbac4768 items=0 ppid=5532 pid=5559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.371000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 17:32:50.371000 audit: BPF prog-id=256 op=UNLOAD Jan 23 17:32:50.371000 audit[5559]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffcbac4748 a3=0 items=0 ppid=5532 pid=5559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.371000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 17:32:50.371000 audit: BPF prog-id=257 op=LOAD Jan 23 17:32:50.371000 audit[5559]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcbac4628 a2=74 a3=95 items=0 ppid=5532 pid=5559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.371000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 17:32:50.371000 audit: BPF prog-id=257 op=UNLOAD Jan 23 17:32:50.371000 audit[5559]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5532 pid=5559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.371000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 17:32:50.371000 audit: BPF prog-id=258 op=LOAD Jan 23 17:32:50.371000 audit[5559]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcbac4658 a2=40 a3=ffffcbac4688 items=0 ppid=5532 pid=5559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.371000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 17:32:50.371000 audit: BPF prog-id=258 op=UNLOAD Jan 23 17:32:50.371000 audit[5559]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffcbac4688 items=0 ppid=5532 pid=5559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.371000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 23 17:32:50.393120 systemd-networkd[1662]: calieede4f16df1: Gained IPv6LL Jan 23 17:32:50.457499 systemd-networkd[1662]: vxlan.calico: Link UP Jan 23 17:32:50.457507 systemd-networkd[1662]: vxlan.calico: Gained carrier Jan 23 17:32:50.465740 containerd[2084]: time="2026-01-23T17:32:50.464774938Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-v8t56,Uid:bea6d6d6-6443-4534-ac1b-26cecad019a7,Namespace:calico-system,Attempt:0,}" Jan 23 17:32:50.595000 audit: BPF prog-id=259 op=LOAD Jan 23 17:32:50.595000 audit[5618]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0a1a6e8 a2=98 a3=fffff0a1a6d8 items=0 ppid=5532 pid=5618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.595000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:32:50.598000 audit: BPF prog-id=259 op=UNLOAD Jan 23 17:32:50.598000 audit[5618]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff0a1a6b8 a3=0 items=0 ppid=5532 pid=5618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.598000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:32:50.605322 systemd-networkd[1662]: calidde043f3499: Link UP Jan 23 17:32:50.606000 audit: BPF prog-id=260 op=LOAD Jan 23 17:32:50.606000 audit[5618]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0a1a3c8 a2=74 a3=95 items=0 ppid=5532 pid=5618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.606000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:32:50.606000 audit: BPF prog-id=260 op=UNLOAD Jan 23 17:32:50.606000 audit[5618]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5532 pid=5618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.606000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:32:50.606000 audit: BPF prog-id=261 op=LOAD Jan 23 17:32:50.606000 audit[5618]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0a1a428 a2=94 a3=2 items=0 ppid=5532 pid=5618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.606000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:32:50.606000 audit: BPF prog-id=261 op=UNLOAD Jan 23 17:32:50.606000 audit[5618]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=5532 pid=5618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.606000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:32:50.606000 audit: BPF prog-id=262 op=LOAD Jan 23 17:32:50.606000 audit[5618]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff0a1a2a8 a2=40 a3=fffff0a1a2d8 items=0 ppid=5532 pid=5618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.606000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:32:50.606000 audit: BPF prog-id=262 op=UNLOAD Jan 23 17:32:50.606000 audit[5618]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=fffff0a1a2d8 items=0 ppid=5532 pid=5618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.606000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:32:50.606000 audit: BPF prog-id=263 op=LOAD Jan 23 17:32:50.606000 audit[5618]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff0a1a3f8 a2=94 a3=b7 items=0 ppid=5532 pid=5618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.606000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:32:50.606000 audit: BPF prog-id=263 op=UNLOAD Jan 23 17:32:50.606000 audit[5618]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=5532 pid=5618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.606000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:32:50.607000 audit: BPF prog-id=264 op=LOAD Jan 23 17:32:50.607000 audit[5618]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff0a19aa8 a2=94 a3=2 items=0 ppid=5532 pid=5618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.607000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:32:50.608000 audit: BPF prog-id=264 op=UNLOAD Jan 23 17:32:50.608000 audit[5618]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=5532 pid=5618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.608000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:32:50.608000 audit: BPF prog-id=265 op=LOAD Jan 23 17:32:50.608000 audit[5618]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff0a19c38 a2=94 a3=30 items=0 ppid=5532 pid=5618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.608000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 23 17:32:50.607007 systemd-networkd[1662]: calidde043f3499: Gained carrier Jan 23 17:32:50.629369 containerd[2084]: 2026-01-23 17:32:50.511 [INFO][5585] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--a--71c1b0067a-k8s-csi--node--driver--v8t56-eth0 csi-node-driver- calico-system bea6d6d6-6443-4534-ac1b-26cecad019a7 733 0 2026-01-23 17:32:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547.1.0-a-71c1b0067a csi-node-driver-v8t56 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidde043f3499 [] [] }} ContainerID="e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d" Namespace="calico-system" Pod="csi-node-driver-v8t56" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-csi--node--driver--v8t56-" Jan 23 17:32:50.629369 containerd[2084]: 2026-01-23 17:32:50.512 [INFO][5585] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d" Namespace="calico-system" Pod="csi-node-driver-v8t56" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-csi--node--driver--v8t56-eth0" Jan 23 17:32:50.629369 containerd[2084]: 2026-01-23 17:32:50.540 [INFO][5597] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d" HandleID="k8s-pod-network.e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d" Workload="ci--4547.1.0--a--71c1b0067a-k8s-csi--node--driver--v8t56-eth0" Jan 23 17:32:50.629369 containerd[2084]: 2026-01-23 17:32:50.540 [INFO][5597] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d" 
HandleID="k8s-pod-network.e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d" Workload="ci--4547.1.0--a--71c1b0067a-k8s-csi--node--driver--v8t56-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032b3b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.1.0-a-71c1b0067a", "pod":"csi-node-driver-v8t56", "timestamp":"2026-01-23 17:32:50.540299436 +0000 UTC"}, Hostname:"ci-4547.1.0-a-71c1b0067a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:32:50.629369 containerd[2084]: 2026-01-23 17:32:50.540 [INFO][5597] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:32:50.629369 containerd[2084]: 2026-01-23 17:32:50.540 [INFO][5597] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 17:32:50.629369 containerd[2084]: 2026-01-23 17:32:50.540 [INFO][5597] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-a-71c1b0067a' Jan 23 17:32:50.629369 containerd[2084]: 2026-01-23 17:32:50.546 [INFO][5597] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:50.629369 containerd[2084]: 2026-01-23 17:32:50.553 [INFO][5597] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:50.629369 containerd[2084]: 2026-01-23 17:32:50.557 [INFO][5597] ipam/ipam.go 511: Trying affinity for 192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:50.629369 containerd[2084]: 2026-01-23 17:32:50.559 [INFO][5597] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:50.629369 containerd[2084]: 2026-01-23 17:32:50.562 [INFO][5597] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:50.629369 containerd[2084]: 2026-01-23 17:32:50.562 [INFO][5597] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:50.629369 containerd[2084]: 2026-01-23 17:32:50.563 [INFO][5597] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d Jan 23 17:32:50.629369 containerd[2084]: 2026-01-23 17:32:50.571 [INFO][5597] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:50.629369 containerd[2084]: 2026-01-23 17:32:50.590 [INFO][5597] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.8/26] block=192.168.117.0/26 handle="k8s-pod-network.e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:50.629369 containerd[2084]: 2026-01-23 17:32:50.590 [INFO][5597] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.8/26] handle="k8s-pod-network.e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d" host="ci-4547.1.0-a-71c1b0067a" Jan 23 17:32:50.629369 containerd[2084]: 2026-01-23 17:32:50.590 [INFO][5597] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 17:32:50.629369 containerd[2084]: 2026-01-23 17:32:50.590 [INFO][5597] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.8/26] IPv6=[] ContainerID="e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d" HandleID="k8s-pod-network.e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d" Workload="ci--4547.1.0--a--71c1b0067a-k8s-csi--node--driver--v8t56-eth0" Jan 23 17:32:50.629819 containerd[2084]: 2026-01-23 17:32:50.596 [INFO][5585] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d" Namespace="calico-system" Pod="csi-node-driver-v8t56" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-csi--node--driver--v8t56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--71c1b0067a-k8s-csi--node--driver--v8t56-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bea6d6d6-6443-4534-ac1b-26cecad019a7", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-71c1b0067a", ContainerID:"", Pod:"csi-node-driver-v8t56", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.117.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidde043f3499", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:32:50.629819 containerd[2084]: 2026-01-23 17:32:50.597 [INFO][5585] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.8/32] ContainerID="e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d" Namespace="calico-system" Pod="csi-node-driver-v8t56" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-csi--node--driver--v8t56-eth0" Jan 23 17:32:50.629819 containerd[2084]: 2026-01-23 17:32:50.598 [INFO][5585] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidde043f3499 ContainerID="e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d" Namespace="calico-system" Pod="csi-node-driver-v8t56" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-csi--node--driver--v8t56-eth0" Jan 23 17:32:50.629819 containerd[2084]: 2026-01-23 17:32:50.606 [INFO][5585] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d" Namespace="calico-system" Pod="csi-node-driver-v8t56" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-csi--node--driver--v8t56-eth0" Jan 23 17:32:50.629819 containerd[2084]: 2026-01-23 17:32:50.609 [INFO][5585] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d" Namespace="calico-system" Pod="csi-node-driver-v8t56" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-csi--node--driver--v8t56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--a--71c1b0067a-k8s-csi--node--driver--v8t56-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bea6d6d6-6443-4534-ac1b-26cecad019a7", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 32, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-a-71c1b0067a", ContainerID:"e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d", Pod:"csi-node-driver-v8t56", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.117.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidde043f3499", MAC:"2a:5f:19:eb:4b:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:32:50.629819 containerd[2084]: 2026-01-23 17:32:50.624 [INFO][5585] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d" Namespace="calico-system" Pod="csi-node-driver-v8t56" WorkloadEndpoint="ci--4547.1.0--a--71c1b0067a-k8s-csi--node--driver--v8t56-eth0" Jan 23 17:32:50.633000 audit: BPF prog-id=266 op=LOAD Jan 23 17:32:50.633000 audit[5622]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffd319488 a2=98 a3=fffffd319478 items=0 ppid=5532 pid=5622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.633000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:32:50.634000 audit: BPF prog-id=266 op=UNLOAD Jan 23 17:32:50.634000 audit[5622]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffffd319458 a3=0 items=0 ppid=5532 pid=5622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.634000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:32:50.635000 audit: BPF prog-id=267 op=LOAD Jan 23 17:32:50.635000 audit[5622]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 
a1=fffffd319118 a2=74 a3=95 items=0 ppid=5532 pid=5622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.635000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:32:50.635000 audit: BPF prog-id=267 op=UNLOAD Jan 23 17:32:50.635000 audit[5622]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=5532 pid=5622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.635000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:32:50.636000 audit: BPF prog-id=268 op=LOAD Jan 23 17:32:50.636000 audit[5622]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffd319178 a2=94 a3=2 items=0 ppid=5532 pid=5622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.636000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:32:50.636000 audit: BPF prog-id=268 op=UNLOAD Jan 23 17:32:50.636000 audit[5622]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=5532 pid=5622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.636000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:32:50.642098 kubelet[3685]: E0123 17:32:50.641793 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-d6mpj" podUID="29070044-7a78-4c22-ba4e-b03de4973ab6" Jan 23 17:32:50.683991 containerd[2084]: time="2026-01-23T17:32:50.683614874Z" level=info msg="connecting to shim e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d" address="unix:///run/containerd/s/837e1c013bc46c55c2ffea149a7b17478ea8f28722802286874d760341b0544b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:32:50.714074 systemd[1]: Started cri-containerd-e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d.scope - libcontainer container e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d. 
Jan 23 17:32:50.728000 audit: BPF prog-id=269 op=LOAD Jan 23 17:32:50.729000 audit: BPF prog-id=270 op=LOAD Jan 23 17:32:50.729000 audit[5649]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5638 pid=5649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532623239306665633839623566646339646633626335663134376365 Jan 23 17:32:50.729000 audit: BPF prog-id=270 op=UNLOAD Jan 23 17:32:50.729000 audit[5649]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5638 pid=5649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532623239306665633839623566646339646633626335663134376365 Jan 23 17:32:50.729000 audit: BPF prog-id=271 op=LOAD Jan 23 17:32:50.729000 audit[5649]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5638 pid=5649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532623239306665633839623566646339646633626335663134376365 Jan 23 17:32:50.729000 audit: BPF prog-id=272 op=LOAD Jan 23 17:32:50.729000 audit[5649]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5638 pid=5649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532623239306665633839623566646339646633626335663134376365 Jan 23 17:32:50.729000 audit: BPF prog-id=272 op=UNLOAD Jan 23 17:32:50.729000 audit[5649]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5638 pid=5649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532623239306665633839623566646339646633626335663134376365 Jan 23 17:32:50.729000 audit: BPF prog-id=271 op=UNLOAD Jan 23 17:32:50.729000 audit[5649]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5638 pid=5649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532623239306665633839623566646339646633626335663134376365 Jan 23 17:32:50.729000 audit: BPF prog-id=273 op=LOAD Jan 23 17:32:50.729000 audit[5649]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5638 pid=5649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532623239306665633839623566646339646633626335663134376365 Jan 23 17:32:50.752163 containerd[2084]: time="2026-01-23T17:32:50.751533052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8t56,Uid:bea6d6d6-6443-4534-ac1b-26cecad019a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"e2b290fec89b5fdc9df3bc5f147ce842215b45dd449dbb60d827aeabe6c8868d\"" Jan 23 17:32:50.755342 containerd[2084]: time="2026-01-23T17:32:50.755307090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:32:50.802000 audit: BPF prog-id=274 op=LOAD Jan 23 17:32:50.802000 audit[5622]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffd319138 a2=40 a3=fffffd319168 items=0 ppid=5532 pid=5622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.802000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:32:50.803000 audit: BPF prog-id=274 op=UNLOAD Jan 23 17:32:50.803000 audit[5622]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffffd319168 items=0 ppid=5532 pid=5622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.803000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:32:50.809000 audit: BPF prog-id=275 op=LOAD Jan 23 17:32:50.809000 audit[5622]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffd319148 a2=94 a3=4 items=0 ppid=5532 pid=5622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.809000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:32:50.809000 audit: BPF prog-id=275 op=UNLOAD Jan 23 17:32:50.809000 audit[5622]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=5532 pid=5622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.809000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:32:50.809000 audit: BPF prog-id=276 op=LOAD Jan 23 17:32:50.809000 audit[5622]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffd318f88 a2=94 a3=5 items=0 ppid=5532 pid=5622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.809000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:32:50.809000 audit: BPF prog-id=276 op=UNLOAD Jan 23 17:32:50.809000 audit[5622]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=5532 pid=5622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.809000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:32:50.809000 audit: BPF prog-id=277 op=LOAD Jan 23 17:32:50.809000 audit[5622]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffd3191b8 a2=94 a3=6 items=0 ppid=5532 pid=5622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.809000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:32:50.809000 audit: BPF prog-id=277 op=UNLOAD Jan 23 17:32:50.809000 audit[5622]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=5532 pid=5622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.809000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:32:50.809000 audit: BPF prog-id=278 op=LOAD Jan 23 17:32:50.809000 audit[5622]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffd318988 a2=94 a3=83 items=0 ppid=5532 pid=5622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.809000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:32:50.810000 audit: BPF prog-id=279 op=LOAD Jan 23 17:32:50.810000 audit[5622]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffffd318748 a2=94 a3=2 items=0 ppid=5532 pid=5622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.810000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:32:50.810000 audit: BPF prog-id=279 op=UNLOAD Jan 23 17:32:50.810000 audit[5622]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=5532 pid=5622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.810000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:32:50.810000 audit: BPF prog-id=278 op=UNLOAD Jan 23 17:32:50.810000 audit[5622]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=28c8f620 a3=28c82b00 items=0 ppid=5532 pid=5622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.810000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 23 17:32:50.818000 audit: BPF prog-id=265 op=UNLOAD Jan 23 17:32:50.818000 audit[5532]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4001352040 a2=0 a3=0 items=0 ppid=4806 pid=5532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.818000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 23 17:32:50.832000 audit[5700]: NETFILTER_CFG table=filter:128 family=2 entries=15 op=nft_register_rule pid=5700 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:50.832000 audit[5700]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe57a2350 a2=0 a3=1 items=0 ppid=3791 pid=5700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.832000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:50.836000 audit[5700]: NETFILTER_CFG table=nat:129 family=2 entries=25 op=nft_register_chain pid=5700 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:32:50.836000 audit[5700]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=8580 a0=3 a1=ffffe57a2350 a2=0 a3=1 items=0 ppid=3791 pid=5700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.836000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:32:50.913000 audit[5727]: NETFILTER_CFG table=mangle:130 family=2 entries=16 op=nft_register_chain pid=5727 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:32:50.913000 audit[5727]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffc315a9c0 a2=0 a3=ffffa441bfa8 items=0 ppid=5532 pid=5727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.913000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:32:50.914000 audit[5725]: NETFILTER_CFG table=nat:131 family=2 entries=15 op=nft_register_chain pid=5725 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:32:50.914000 audit[5725]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffedf64f70 a2=0 a3=ffff8a026fa8 items=0 ppid=5532 pid=5725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.914000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:32:50.933000 audit[5726]: NETFILTER_CFG table=raw:132 family=2 entries=21 op=nft_register_chain pid=5726 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:32:50.933000 audit[5726]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=fffff772bfb0 a2=0 a3=ffff87c83fa8 items=0 ppid=5532 pid=5726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.933000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:32:50.960000 audit[5730]: NETFILTER_CFG table=filter:133 family=2 entries=293 op=nft_register_chain pid=5730 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:32:50.960000 audit[5730]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=173940 a0=3 a1=ffffcb2b5bd0 a2=0 a3=ffff841c6fa8 items=0 ppid=5532 pid=5730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.960000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:32:50.980000 audit[5741]: NETFILTER_CFG table=filter:134 family=2 entries=56 op=nft_register_chain pid=5741 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 23 17:32:50.980000 audit[5741]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=25500 a0=3 a1=ffffe1b7ae60 a2=0 a3=ffffb559ffa8 items=0 ppid=5532 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:32:50.980000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 23 17:32:51.084451 containerd[2084]: time="2026-01-23T17:32:51.084394525Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:32:51.087944 containerd[2084]: time="2026-01-23T17:32:51.087738601Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 17:32:51.088061 containerd[2084]: time="2026-01-23T17:32:51.087883895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 17:32:51.088616 kubelet[3685]: E0123 17:32:51.088207 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:32:51.088616 kubelet[3685]: E0123 17:32:51.088249 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:32:51.088616 kubelet[3685]: E0123 17:32:51.088319 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-v8t56_calico-system(bea6d6d6-6443-4534-ac1b-26cecad019a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 17:32:51.089731 containerd[2084]: time="2026-01-23T17:32:51.089599579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 17:32:51.480589 containerd[2084]: time="2026-01-23T17:32:51.480532332Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:32:51.484406 containerd[2084]: time="2026-01-23T17:32:51.484211727Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 17:32:51.484406 containerd[2084]: time="2026-01-23T17:32:51.484354413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 17:32:51.484570 kubelet[3685]: E0123 17:32:51.484526 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:32:51.484606 kubelet[3685]: E0123 17:32:51.484570 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:32:51.484656 kubelet[3685]: E0123 17:32:51.484632 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-v8t56_calico-system(bea6d6d6-6443-4534-ac1b-26cecad019a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 17:32:51.484699 kubelet[3685]: E0123 17:32:51.484668 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v8t56" podUID="bea6d6d6-6443-4534-ac1b-26cecad019a7" Jan 23 17:32:51.644575 kubelet[3685]: E0123 17:32:51.644496 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v8t56" podUID="bea6d6d6-6443-4534-ac1b-26cecad019a7" Jan 23 17:32:51.737085 systemd-networkd[1662]: vxlan.calico: Gained IPv6LL Jan 23 17:32:52.569024 systemd-networkd[1662]: calidde043f3499: Gained IPv6LL Jan 23 17:32:52.645683 kubelet[3685]: E0123 17:32:52.645613 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v8t56" podUID="bea6d6d6-6443-4534-ac1b-26cecad019a7" Jan 23 17:32:57.458621 containerd[2084]: time="2026-01-23T17:32:57.458110388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:32:57.716140 containerd[2084]: time="2026-01-23T17:32:57.715967757Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:32:57.718907 containerd[2084]: time="2026-01-23T17:32:57.718833471Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:32:57.719115 containerd[2084]: time="2026-01-23T17:32:57.718877849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 17:32:57.719210 kubelet[3685]: E0123 17:32:57.719158 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:32:57.719500 kubelet[3685]: E0123 17:32:57.719218 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:32:57.719500 kubelet[3685]: E0123 17:32:57.719292 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-844d8cf486-rp7bn_calico-system(c0c6a4bb-e851-48c7-afc7-8d5b88a4086b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:32:57.720887 containerd[2084]: time="2026-01-23T17:32:57.720837362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:32:57.958006 containerd[2084]: time="2026-01-23T17:32:57.957951159Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:32:57.961072 containerd[2084]: time="2026-01-23T17:32:57.961022810Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:32:57.961158 containerd[2084]: time="2026-01-23T17:32:57.961121078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 17:32:57.961356 kubelet[3685]: E0123 17:32:57.961316 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:32:57.961414 kubelet[3685]: E0123 17:32:57.961369 
3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:32:57.961466 kubelet[3685]: E0123 17:32:57.961446 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-844d8cf486-rp7bn_calico-system(c0c6a4bb-e851-48c7-afc7-8d5b88a4086b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:32:57.961526 kubelet[3685]: E0123 17:32:57.961484 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-844d8cf486-rp7bn" podUID="c0c6a4bb-e851-48c7-afc7-8d5b88a4086b" Jan 23 17:32:59.458100 containerd[2084]: time="2026-01-23T17:32:59.458057801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:32:59.760769 containerd[2084]: time="2026-01-23T17:32:59.760627357Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:32:59.763941 containerd[2084]: time="2026-01-23T17:32:59.763891817Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:32:59.764135 containerd[2084]: time="2026-01-23T17:32:59.763993213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:32:59.764407 kubelet[3685]: E0123 17:32:59.764295 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:32:59.764407 kubelet[3685]: E0123 17:32:59.764350 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:32:59.765769 kubelet[3685]: E0123 17:32:59.764785 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85fb74bcbb-xldvx_calico-apiserver(ce786634-1bb7-4148-9461-d169f302e50f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:32:59.765769 kubelet[3685]: E0123 17:32:59.765724 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-xldvx" podUID="ce786634-1bb7-4148-9461-d169f302e50f" Jan 23 17:33:00.461858 containerd[2084]: time="2026-01-23T17:33:00.461794141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 17:33:00.750943 containerd[2084]: time="2026-01-23T17:33:00.750669164Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:00.754197 containerd[2084]: time="2026-01-23T17:33:00.754055509Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 17:33:00.754197 containerd[2084]: time="2026-01-23T17:33:00.754152978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:00.754539 kubelet[3685]: E0123 17:33:00.754487 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:33:00.754666 kubelet[3685]: E0123 17:33:00.754648 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:33:00.754929 kubelet[3685]: E0123 17:33:00.754859 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-55dddbdf7b-97cf9_calico-system(e966f1d6-8ee3-4476-b957-9bae66b7553b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:00.754929 kubelet[3685]: E0123 17:33:00.754891 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55dddbdf7b-97cf9" podUID="e966f1d6-8ee3-4476-b957-9bae66b7553b" Jan 23 17:33:01.457281 containerd[2084]: time="2026-01-23T17:33:01.457205400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:33:01.705103 containerd[2084]: time="2026-01-23T17:33:01.704898572Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 
17:33:01.707727 containerd[2084]: time="2026-01-23T17:33:01.707568582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:33:01.707727 containerd[2084]: time="2026-01-23T17:33:01.707612864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:01.708997 kubelet[3685]: E0123 17:33:01.708005 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:33:01.708997 kubelet[3685]: E0123 17:33:01.708048 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:33:01.708997 kubelet[3685]: E0123 17:33:01.708110 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-kwnqz_calico-system(3673ff07-a128-4686-9fb6-6fd2ab66f4db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:01.708997 kubelet[3685]: E0123 17:33:01.708136 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kwnqz" podUID="3673ff07-a128-4686-9fb6-6fd2ab66f4db" Jan 23 17:33:03.457558 containerd[2084]: time="2026-01-23T17:33:03.457380573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:33:03.718404 containerd[2084]: time="2026-01-23T17:33:03.718008608Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:03.721163 containerd[2084]: time="2026-01-23T17:33:03.721042916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:33:03.721163 containerd[2084]: time="2026-01-23T17:33:03.721059732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:03.721498 kubelet[3685]: E0123 17:33:03.721441 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:33:03.721498 kubelet[3685]: E0123 17:33:03.721491 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:33:03.721955 kubelet[3685]: E0123 17:33:03.721560 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85fb74bcbb-d6mpj_calico-apiserver(29070044-7a78-4c22-ba4e-b03de4973ab6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:03.721955 kubelet[3685]: E0123 17:33:03.721587 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-d6mpj" podUID="29070044-7a78-4c22-ba4e-b03de4973ab6" Jan 23 17:33:05.458082 containerd[2084]: time="2026-01-23T17:33:05.458023168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:33:05.786007 containerd[2084]: time="2026-01-23T17:33:05.785867391Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:05.788986 containerd[2084]: time="2026-01-23T17:33:05.788877698Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 17:33:05.788986 containerd[2084]: time="2026-01-23T17:33:05.788929708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:05.789308 kubelet[3685]: E0123 17:33:05.789258 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:33:05.789653 kubelet[3685]: E0123 17:33:05.789308 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:33:05.789653 kubelet[3685]: E0123 17:33:05.789382 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-v8t56_calico-system(bea6d6d6-6443-4534-ac1b-26cecad019a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:05.791149 containerd[2084]: time="2026-01-23T17:33:05.791119899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 17:33:06.034708 containerd[2084]: time="2026-01-23T17:33:06.034654158Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:06.037788 containerd[2084]: time="2026-01-23T17:33:06.037683977Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 17:33:06.037873 containerd[2084]: time="2026-01-23T17:33:06.037791926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:06.038375 kubelet[3685]: E0123 17:33:06.038178 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:33:06.038375 kubelet[3685]: E0123 17:33:06.038231 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:33:06.038375 kubelet[3685]: E0123 17:33:06.038306 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-v8t56_calico-system(bea6d6d6-6443-4534-ac1b-26cecad019a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:06.038375 kubelet[3685]: E0123 17:33:06.038338 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v8t56" podUID="bea6d6d6-6443-4534-ac1b-26cecad019a7" Jan 23 17:33:10.458910 kubelet[3685]: E0123 17:33:10.458485 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-844d8cf486-rp7bn" podUID="c0c6a4bb-e851-48c7-afc7-8d5b88a4086b" Jan 23 17:33:12.459028 kubelet[3685]: E0123 17:33:12.458078 3685 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kwnqz" podUID="3673ff07-a128-4686-9fb6-6fd2ab66f4db" Jan 23 17:33:13.457394 kubelet[3685]: E0123 17:33:13.457342 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55dddbdf7b-97cf9" podUID="e966f1d6-8ee3-4476-b957-9bae66b7553b" Jan 23 17:33:14.457615 kubelet[3685]: E0123 17:33:14.457551 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-d6mpj" podUID="29070044-7a78-4c22-ba4e-b03de4973ab6" Jan 23 17:33:14.457615 kubelet[3685]: E0123 17:33:14.457572 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-xldvx" podUID="ce786634-1bb7-4148-9461-d169f302e50f" Jan 23 17:33:17.458193 kubelet[3685]: E0123 17:33:17.458134 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v8t56" podUID="bea6d6d6-6443-4534-ac1b-26cecad019a7" Jan 23 17:33:21.458896 containerd[2084]: time="2026-01-23T17:33:21.458739838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:33:21.705202 containerd[2084]: time="2026-01-23T17:33:21.705132226Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 
17:33:21.708175 containerd[2084]: time="2026-01-23T17:33:21.708110777Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:33:21.709067 containerd[2084]: time="2026-01-23T17:33:21.708195821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:21.709111 kubelet[3685]: E0123 17:33:21.708339 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:33:21.709111 kubelet[3685]: E0123 17:33:21.708395 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:33:21.709111 kubelet[3685]: E0123 17:33:21.708470 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-844d8cf486-rp7bn_calico-system(c0c6a4bb-e851-48c7-afc7-8d5b88a4086b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:21.710623 containerd[2084]: time="2026-01-23T17:33:21.710590524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:33:21.980907 containerd[2084]: time="2026-01-23T17:33:21.980743789Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:21.985115 containerd[2084]: time="2026-01-23T17:33:21.985056953Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:33:21.985656 containerd[2084]: time="2026-01-23T17:33:21.985150517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:21.985702 kubelet[3685]: E0123 17:33:21.985398 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:33:21.985702 kubelet[3685]: E0123 17:33:21.985457 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:33:21.985702 kubelet[3685]: E0123 17:33:21.985555 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod 
whisker-844d8cf486-rp7bn_calico-system(c0c6a4bb-e851-48c7-afc7-8d5b88a4086b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:21.985989 kubelet[3685]: E0123 17:33:21.985943 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-844d8cf486-rp7bn" podUID="c0c6a4bb-e851-48c7-afc7-8d5b88a4086b" Jan 23 17:33:25.457490 containerd[2084]: time="2026-01-23T17:33:25.457448075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:33:25.771001 containerd[2084]: time="2026-01-23T17:33:25.770868307Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:25.774417 containerd[2084]: time="2026-01-23T17:33:25.774370599Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:33:25.774549 containerd[2084]: time="2026-01-23T17:33:25.774470019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:25.774760 kubelet[3685]: E0123 17:33:25.774699 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:33:25.775180 kubelet[3685]: E0123 17:33:25.774856 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:33:25.775718 kubelet[3685]: E0123 17:33:25.775589 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-kwnqz_calico-system(3673ff07-a128-4686-9fb6-6fd2ab66f4db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:25.775718 kubelet[3685]: E0123 17:33:25.775689 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kwnqz" 
podUID="3673ff07-a128-4686-9fb6-6fd2ab66f4db" Jan 23 17:33:26.459080 containerd[2084]: time="2026-01-23T17:33:26.458706442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 17:33:26.746717 containerd[2084]: time="2026-01-23T17:33:26.746437205Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:26.749647 containerd[2084]: time="2026-01-23T17:33:26.749592944Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 17:33:26.749767 containerd[2084]: time="2026-01-23T17:33:26.749634922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:26.750062 kubelet[3685]: E0123 17:33:26.750023 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:33:26.750181 kubelet[3685]: E0123 17:33:26.750167 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:33:26.750753 kubelet[3685]: E0123 17:33:26.750729 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-55dddbdf7b-97cf9_calico-system(e966f1d6-8ee3-4476-b957-9bae66b7553b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:26.751242 kubelet[3685]: E0123 17:33:26.751208 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55dddbdf7b-97cf9" podUID="e966f1d6-8ee3-4476-b957-9bae66b7553b" Jan 23 17:33:28.459487 containerd[2084]: time="2026-01-23T17:33:28.459448022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:33:28.721979 containerd[2084]: time="2026-01-23T17:33:28.721660720Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:28.725422 containerd[2084]: time="2026-01-23T17:33:28.725230203Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:33:28.725422 containerd[2084]: time="2026-01-23T17:33:28.725239515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes 
read=0" Jan 23 17:33:28.725773 kubelet[3685]: E0123 17:33:28.725717 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:33:28.725773 kubelet[3685]: E0123 17:33:28.725766 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:33:28.726317 kubelet[3685]: E0123 17:33:28.725839 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85fb74bcbb-xldvx_calico-apiserver(ce786634-1bb7-4148-9461-d169f302e50f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:28.726317 kubelet[3685]: E0123 17:33:28.725923 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-xldvx" podUID="ce786634-1bb7-4148-9461-d169f302e50f" Jan 23 17:33:29.459879 containerd[2084]: time="2026-01-23T17:33:29.458704657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:33:29.733463 containerd[2084]: time="2026-01-23T17:33:29.733329902Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:29.737103 containerd[2084]: time="2026-01-23T17:33:29.737044775Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:33:29.737377 containerd[2084]: time="2026-01-23T17:33:29.737138442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:29.737414 kubelet[3685]: E0123 17:33:29.737343 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:33:29.737414 kubelet[3685]: E0123 17:33:29.737386 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:33:29.738938 kubelet[3685]: E0123 17:33:29.737452 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85fb74bcbb-d6mpj_calico-apiserver(29070044-7a78-4c22-ba4e-b03de4973ab6): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:29.738938 kubelet[3685]: E0123 17:33:29.737478 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-d6mpj" podUID="29070044-7a78-4c22-ba4e-b03de4973ab6" Jan 23 17:33:31.458503 containerd[2084]: time="2026-01-23T17:33:31.458459447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:33:31.712421 containerd[2084]: time="2026-01-23T17:33:31.712285450Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:31.715603 containerd[2084]: time="2026-01-23T17:33:31.715551570Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 17:33:31.715603 containerd[2084]: time="2026-01-23T17:33:31.715555770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 17:33:31.715850 kubelet[3685]: E0123 17:33:31.715801 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:33:31.716139 kubelet[3685]: E0123 17:33:31.715867 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:33:31.716139 kubelet[3685]: E0123 17:33:31.715938 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-v8t56_calico-system(bea6d6d6-6443-4534-ac1b-26cecad019a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:31.717833 containerd[2084]: time="2026-01-23T17:33:31.717598409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 17:33:31.989566 containerd[2084]: time="2026-01-23T17:33:31.989439178Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:33:31.993139 containerd[2084]: time="2026-01-23T17:33:31.993023429Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 17:33:31.993139 containerd[2084]: time="2026-01-23T17:33:31.993080600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 
17:33:31.993302 kubelet[3685]: E0123 17:33:31.993269 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:33:31.993336 kubelet[3685]: E0123 17:33:31.993314 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:33:31.993729 kubelet[3685]: E0123 17:33:31.993376 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-v8t56_calico-system(bea6d6d6-6443-4534-ac1b-26cecad019a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 17:33:31.993729 kubelet[3685]: E0123 17:33:31.993412 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v8t56" podUID="bea6d6d6-6443-4534-ac1b-26cecad019a7" Jan 23 17:33:33.457660 kubelet[3685]: E0123 17:33:33.457549 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-844d8cf486-rp7bn" podUID="c0c6a4bb-e851-48c7-afc7-8d5b88a4086b" Jan 23 17:33:38.457535 kubelet[3685]: E0123 17:33:38.457193 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kwnqz" podUID="3673ff07-a128-4686-9fb6-6fd2ab66f4db" 
Jan 23 17:33:40.457800 kubelet[3685]: E0123 17:33:40.457537 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55dddbdf7b-97cf9" podUID="e966f1d6-8ee3-4476-b957-9bae66b7553b" Jan 23 17:33:42.459191 kubelet[3685]: E0123 17:33:42.459062 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-d6mpj" podUID="29070044-7a78-4c22-ba4e-b03de4973ab6" Jan 23 17:33:43.456768 kubelet[3685]: E0123 17:33:43.456704 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-xldvx" podUID="ce786634-1bb7-4148-9461-d169f302e50f" Jan 23 17:33:46.460715 kubelet[3685]: E0123 17:33:46.460489 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v8t56" podUID="bea6d6d6-6443-4534-ac1b-26cecad019a7" Jan 23 17:33:48.460629 kubelet[3685]: E0123 17:33:48.460227 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-844d8cf486-rp7bn" podUID="c0c6a4bb-e851-48c7-afc7-8d5b88a4086b" Jan 23 17:33:49.457860 kubelet[3685]: E0123 17:33:49.457801 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kwnqz" podUID="3673ff07-a128-4686-9fb6-6fd2ab66f4db" Jan 23 17:33:51.456671 kubelet[3685]: E0123 17:33:51.456522 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55dddbdf7b-97cf9" podUID="e966f1d6-8ee3-4476-b957-9bae66b7553b" Jan 23 17:33:56.460937 kubelet[3685]: E0123 17:33:56.460621 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-xldvx" podUID="ce786634-1bb7-4148-9461-d169f302e50f" Jan 23 17:33:57.457860 kubelet[3685]: E0123 17:33:57.457655 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-d6mpj" podUID="29070044-7a78-4c22-ba4e-b03de4973ab6" Jan 23 17:33:58.460523 kubelet[3685]: E0123 17:33:58.460418 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v8t56" podUID="bea6d6d6-6443-4534-ac1b-26cecad019a7" Jan 23 17:33:59.457304 kubelet[3685]: E0123 17:33:59.457207 3685 pod_workers.go:1324] 
"Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-844d8cf486-rp7bn" podUID="c0c6a4bb-e851-48c7-afc7-8d5b88a4086b" Jan 23 17:34:04.465549 kubelet[3685]: E0123 17:34:04.465143 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kwnqz" podUID="3673ff07-a128-4686-9fb6-6fd2ab66f4db" Jan 23 17:34:06.460381 kubelet[3685]: E0123 17:34:06.460276 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55dddbdf7b-97cf9" podUID="e966f1d6-8ee3-4476-b957-9bae66b7553b" Jan 23 17:34:06.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.22:22-10.200.16.10:47122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:06.500592 systemd[1]: Started sshd@7-10.200.20.22:22-10.200.16.10:47122.service - OpenSSH per-connection server daemon (10.200.16.10:47122). Jan 23 17:34:06.504929 kernel: kauditd_printk_skb: 397 callbacks suppressed Jan 23 17:34:06.505035 kernel: audit: type=1130 audit(1769189646.499:766): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.22:22-10.200.16.10:47122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:34:06.943000 audit[5853]: USER_ACCT pid=5853 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:06.949120 sshd[5853]: Accepted publickey for core from 10.200.16.10 port 47122 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:34:06.962096 kernel: audit: type=1101 audit(1769189646.943:767): pid=5853 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:06.960000 audit[5853]: CRED_ACQ pid=5853 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:06.965702 sshd-session[5853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:34:06.986929 kernel: audit: type=1103 audit(1769189646.960:768): pid=5853 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:06.988013 kernel: audit: type=1006 audit(1769189646.960:769): pid=5853 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 23 17:34:06.960000 audit[5853]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff6532780 a2=3 a3=0 items=0 ppid=1 pid=5853 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:07.005156 kernel: audit: type=1300 audit(1769189646.960:769): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff6532780 a2=3 a3=0 items=0 ppid=1 pid=5853 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:06.960000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:07.012441 kernel: audit: type=1327 audit(1769189646.960:769): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:07.008102 systemd-logind[2051]: New session 11 of user core. Jan 23 17:34:07.018055 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 23 17:34:07.020000 audit[5853]: USER_START pid=5853 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:07.041000 audit[5857]: CRED_ACQ pid=5857 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:07.056985 kernel: audit: type=1105 audit(1769189647.020:770): pid=5853 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:07.057124 kernel: audit: type=1103 audit(1769189647.041:771): pid=5857 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:07.309982 sshd[5857]: Connection closed by 10.200.16.10 port 47122 Jan 23 17:34:07.310549 sshd-session[5853]: pam_unix(sshd:session): session closed for user core Jan 23 17:34:07.310000 audit[5853]: USER_END pid=5853 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:07.332755 systemd[1]: sshd@7-10.200.20.22:22-10.200.16.10:47122.service: Deactivated successfully. Jan 23 17:34:07.310000 audit[5853]: CRED_DISP pid=5853 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:07.336274 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 17:34:07.346725 kernel: audit: type=1106 audit(1769189647.310:772): pid=5853 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:07.347657 kernel: audit: type=1104 audit(1769189647.310:773): pid=5853 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:07.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.22:22-10.200.16.10:47122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:07.349266 systemd-logind[2051]: Session 11 logged out. Waiting for processes to exit. Jan 23 17:34:07.350466 systemd-logind[2051]: Removed session 11. 
Jan 23 17:34:07.457541 kubelet[3685]: E0123 17:34:07.457208 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-xldvx" podUID="ce786634-1bb7-4148-9461-d169f302e50f" Jan 23 17:34:11.457955 containerd[2084]: time="2026-01-23T17:34:11.457680070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:34:11.713216 containerd[2084]: time="2026-01-23T17:34:11.712930472Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:11.716039 containerd[2084]: time="2026-01-23T17:34:11.715982679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:34:11.716039 containerd[2084]: time="2026-01-23T17:34:11.715986423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:11.716270 kubelet[3685]: E0123 17:34:11.716227 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:34:11.716576 kubelet[3685]: E0123 17:34:11.716275 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:34:11.716576 kubelet[3685]: E0123 17:34:11.716341 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-844d8cf486-rp7bn_calico-system(c0c6a4bb-e851-48c7-afc7-8d5b88a4086b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:34:11.718019 containerd[2084]: time="2026-01-23T17:34:11.717919741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:34:12.008184 containerd[2084]: time="2026-01-23T17:34:12.008053216Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:12.011683 containerd[2084]: time="2026-01-23T17:34:12.011532470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:34:12.011683 containerd[2084]: time="2026-01-23T17:34:12.011637186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:12.012078 kubelet[3685]: E0123 17:34:12.012035 3685 log.go:32] "PullImage from 
image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:34:12.012176 kubelet[3685]: E0123 17:34:12.012085 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:34:12.012176 kubelet[3685]: E0123 17:34:12.012152 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-844d8cf486-rp7bn_calico-system(c0c6a4bb-e851-48c7-afc7-8d5b88a4086b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:34:12.012328 kubelet[3685]: E0123 17:34:12.012191 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-844d8cf486-rp7bn" podUID="c0c6a4bb-e851-48c7-afc7-8d5b88a4086b" Jan 23 17:34:12.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.22:22-10.200.16.10:36384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:12.399760 systemd[1]: Started sshd@8-10.200.20.22:22-10.200.16.10:36384.service - OpenSSH per-connection server daemon (10.200.16.10:36384). Jan 23 17:34:12.402829 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 17:34:12.402920 kernel: audit: type=1130 audit(1769189652.399:775): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.22:22-10.200.16.10:36384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:34:12.457171 containerd[2084]: time="2026-01-23T17:34:12.456923674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:34:12.700955 containerd[2084]: time="2026-01-23T17:34:12.700244468Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:12.705872 containerd[2084]: time="2026-01-23T17:34:12.704303135Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:34:12.705872 containerd[2084]: time="2026-01-23T17:34:12.704352249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:12.706215 kubelet[3685]: E0123 17:34:12.706168 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:34:12.706313 kubelet[3685]: E0123 17:34:12.706220 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:34:12.706313 kubelet[3685]: E0123 17:34:12.706293 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85fb74bcbb-d6mpj_calico-apiserver(29070044-7a78-4c22-ba4e-b03de4973ab6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:34:12.706353 kubelet[3685]: E0123 17:34:12.706319 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-d6mpj" podUID="29070044-7a78-4c22-ba4e-b03de4973ab6" Jan 23 17:34:12.842876 sshd[5881]: Accepted publickey for core from 10.200.16.10 port 36384 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:34:12.841000 audit[5881]: USER_ACCT pid=5881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:12.845213 sshd-session[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:34:12.843000 audit[5881]: CRED_ACQ pid=5881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:12.878507 kernel: audit: type=1101 audit(1769189652.841:776): pid=5881 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:12.878617 kernel: audit: type=1103 audit(1769189652.843:777): pid=5881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:12.889144 kernel: audit: type=1006 audit(1769189652.843:778): pid=5881 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 23 17:34:12.843000 audit[5881]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc8f16ac0 a2=3 a3=0 items=0 ppid=1 pid=5881 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:12.890912 systemd-logind[2051]: New session 12 of user core. Jan 23 17:34:12.906718 kernel: audit: type=1300 audit(1769189652.843:778): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc8f16ac0 a2=3 a3=0 items=0 ppid=1 pid=5881 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:12.843000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:12.913624 kernel: audit: type=1327 audit(1769189652.843:778): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:12.916090 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 23 17:34:12.975000 audit[5881]: USER_START pid=5881 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:12.995000 audit[5912]: CRED_ACQ pid=5912 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:13.010618 kernel: audit: type=1105 audit(1769189652.975:779): pid=5881 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:13.010749 kernel: audit: type=1103 audit(1769189652.995:780): pid=5912 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:13.224766 sshd[5912]: Connection closed by 10.200.16.10 port 36384 Jan 23 17:34:13.225957 sshd-session[5881]: pam_unix(sshd:session): session closed for user core Jan 23 17:34:13.227000 audit[5881]: USER_END pid=5881 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:13.230000 audit[5881]: CRED_DISP pid=5881 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:13.247378 systemd[1]: sshd@8-10.200.20.22:22-10.200.16.10:36384.service: Deactivated successfully. Jan 23 17:34:13.251950 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 17:34:13.255166 systemd-logind[2051]: Session 12 logged out. Waiting for processes to exit. Jan 23 17:34:13.260784 systemd-logind[2051]: Removed session 12. Jan 23 17:34:13.261620 kernel: audit: type=1106 audit(1769189653.227:781): pid=5881 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:13.261679 kernel: audit: type=1104 audit(1769189653.230:782): pid=5881 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:13.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.22:22-10.200.16.10:36384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:34:13.457587 containerd[2084]: time="2026-01-23T17:34:13.457387259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:34:13.754446 containerd[2084]: time="2026-01-23T17:34:13.754371262Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:13.757916 containerd[2084]: time="2026-01-23T17:34:13.757819307Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 17:34:13.757916 containerd[2084]: time="2026-01-23T17:34:13.757879741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:13.758458 kubelet[3685]: E0123 17:34:13.758158 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:34:13.758458 kubelet[3685]: E0123 17:34:13.758233 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:34:13.758458 kubelet[3685]: E0123 17:34:13.758321 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-v8t56_calico-system(bea6d6d6-6443-4534-ac1b-26cecad019a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 17:34:13.759694 containerd[2084]: time="2026-01-23T17:34:13.759512544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 17:34:14.044257 containerd[2084]: time="2026-01-23T17:34:14.044126194Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:14.047229 containerd[2084]: time="2026-01-23T17:34:14.047188785Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 17:34:14.047340 containerd[2084]: time="2026-01-23T17:34:14.047275741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:14.047557 kubelet[3685]: E0123 17:34:14.047509 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:34:14.047617 kubelet[3685]: E0123 17:34:14.047562 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:34:14.047650 kubelet[3685]: E0123 17:34:14.047629 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-v8t56_calico-system(bea6d6d6-6443-4534-ac1b-26cecad019a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 17:34:14.047695 kubelet[3685]: E0123 17:34:14.047659 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v8t56" podUID="bea6d6d6-6443-4534-ac1b-26cecad019a7" Jan 23 17:34:17.458642 containerd[2084]: time="2026-01-23T17:34:17.458336616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:34:17.724302 containerd[2084]: time="2026-01-23T17:34:17.724000048Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:17.727580 containerd[2084]: time="2026-01-23T17:34:17.727436892Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:34:17.727580 containerd[2084]: time="2026-01-23T17:34:17.727536087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:17.728377 kubelet[3685]: E0123 17:34:17.727933 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:34:17.728377 kubelet[3685]: E0123 17:34:17.727978 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:34:17.728377 kubelet[3685]: E0123 17:34:17.728045 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-kwnqz_calico-system(3673ff07-a128-4686-9fb6-6fd2ab66f4db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:34:17.728377 kubelet[3685]: E0123 17:34:17.728074 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kwnqz" podUID="3673ff07-a128-4686-9fb6-6fd2ab66f4db" Jan 23 17:34:18.314823 systemd[1]: Started sshd@9-10.200.20.22:22-10.200.16.10:36400.service - OpenSSH per-connection server daemon (10.200.16.10:36400). Jan 23 17:34:18.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.22:22-10.200.16.10:36400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:18.318922 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 17:34:18.318989 kernel: audit: type=1130 audit(1769189658.314:784): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.22:22-10.200.16.10:36400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:18.746000 audit[5924]: USER_ACCT pid=5924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:18.763077 sshd[5924]: Accepted publickey for core from 10.200.16.10 port 36400 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:34:18.764457 sshd-session[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:34:18.762000 audit[5924]: CRED_ACQ pid=5924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:18.780538 kernel: audit: type=1101 audit(1769189658.746:785): pid=5924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:18.780634 kernel: audit: type=1103 audit(1769189658.762:786): pid=5924 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:18.785437 systemd-logind[2051]: New session 13 of user core. 
Jan 23 17:34:18.788342 kernel: audit: type=1006 audit(1769189658.762:787): pid=5924 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 23 17:34:18.762000 audit[5924]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffcf349d0 a2=3 a3=0 items=0 ppid=1 pid=5924 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:18.811571 kernel: audit: type=1300 audit(1769189658.762:787): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffcf349d0 a2=3 a3=0 items=0 ppid=1 pid=5924 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:18.812863 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 17:34:18.762000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:18.821276 kernel: audit: type=1327 audit(1769189658.762:787): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:18.823000 audit[5924]: USER_START pid=5924 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:18.843000 audit[5928]: CRED_ACQ pid=5928 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:18.857643 kernel: audit: type=1105 audit(1769189658.823:788): pid=5924 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:18.857765 kernel: audit: type=1103 audit(1769189658.843:789): pid=5928 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:19.055071 sshd[5928]: Connection closed by 10.200.16.10 port 36400 Jan 23 17:34:19.056057 sshd-session[5924]: pam_unix(sshd:session): session closed for user core Jan 23 17:34:19.058000 audit[5924]: USER_END pid=5924 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:19.064756 systemd[1]: sshd@9-10.200.20.22:22-10.200.16.10:36400.service: Deactivated successfully. Jan 23 17:34:19.071052 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 17:34:19.084931 systemd-logind[2051]: Session 13 logged out. Waiting for processes to exit. Jan 23 17:34:19.086231 systemd-logind[2051]: Removed session 13. 
Jan 23 17:34:19.058000 audit[5924]: CRED_DISP pid=5924 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:19.100897 kernel: audit: type=1106 audit(1769189659.058:790): pid=5924 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:19.101034 kernel: audit: type=1104 audit(1769189659.058:791): pid=5924 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:19.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.22:22-10.200.16.10:36400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:19.146052 systemd[1]: Started sshd@10-10.200.20.22:22-10.200.16.10:36406.service - OpenSSH per-connection server daemon (10.200.16.10:36406). Jan 23 17:34:19.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.22:22-10.200.16.10:36406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:19.570000 audit[5941]: USER_ACCT pid=5941 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:19.571768 sshd[5941]: Accepted publickey for core from 10.200.16.10 port 36406 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:34:19.572000 audit[5941]: CRED_ACQ pid=5941 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:19.572000 audit[5941]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd98f1660 a2=3 a3=0 items=0 ppid=1 pid=5941 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:19.572000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:19.573813 sshd-session[5941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:34:19.580818 systemd-logind[2051]: New session 14 of user core. Jan 23 17:34:19.585072 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 23 17:34:19.588000 audit[5941]: USER_START pid=5941 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:19.589000 audit[5945]: CRED_ACQ pid=5945 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:19.885972 sshd[5945]: Connection closed by 10.200.16.10 port 36406 Jan 23 17:34:19.887150 sshd-session[5941]: pam_unix(sshd:session): session closed for user core Jan 23 17:34:19.887000 audit[5941]: USER_END pid=5941 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:19.887000 audit[5941]: CRED_DISP pid=5941 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:19.890774 systemd[1]: sshd@10-10.200.20.22:22-10.200.16.10:36406.service: Deactivated successfully. Jan 23 17:34:19.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.22:22-10.200.16.10:36406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:19.893080 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 17:34:19.896046 systemd-logind[2051]: Session 14 logged out. Waiting for processes to exit. Jan 23 17:34:19.897762 systemd-logind[2051]: Removed session 14. Jan 23 17:34:19.984738 systemd[1]: Started sshd@11-10.200.20.22:22-10.200.16.10:51198.service - OpenSSH per-connection server daemon (10.200.16.10:51198). Jan 23 17:34:19.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.22:22-10.200.16.10:51198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:34:20.405000 audit[5955]: USER_ACCT pid=5955 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:20.406910 sshd[5955]: Accepted publickey for core from 10.200.16.10 port 51198 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:34:20.406000 audit[5955]: CRED_ACQ pid=5955 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:20.406000 audit[5955]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd16fb320 a2=3 a3=0 items=0 ppid=1 pid=5955 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:20.406000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:20.407753 sshd-session[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:34:20.412071 systemd-logind[2051]: New session 15 of user core. Jan 23 17:34:20.418044 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 17:34:20.419000 audit[5955]: USER_START pid=5955 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:20.421000 audit[5959]: CRED_ACQ pid=5959 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:20.458291 containerd[2084]: time="2026-01-23T17:34:20.458123700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 17:34:20.698606 sshd[5959]: Connection closed by 10.200.16.10 port 51198 Jan 23 17:34:20.699826 sshd-session[5955]: pam_unix(sshd:session): session closed for user core Jan 23 17:34:20.700000 audit[5955]: USER_END pid=5955 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:20.700000 audit[5955]: CRED_DISP pid=5955 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:20.703838 systemd[1]: sshd@11-10.200.20.22:22-10.200.16.10:51198.service: Deactivated successfully. Jan 23 17:34:20.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.22:22-10.200.16.10:51198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:34:20.706764 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 17:34:20.710575 systemd-logind[2051]: Session 15 logged out. Waiting for processes to exit. Jan 23 17:34:20.712496 systemd-logind[2051]: Removed session 15. Jan 23 17:34:20.861498 containerd[2084]: time="2026-01-23T17:34:20.861448320Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:20.864376 containerd[2084]: time="2026-01-23T17:34:20.864328685Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 17:34:20.864603 containerd[2084]: time="2026-01-23T17:34:20.864409559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:20.864865 kubelet[3685]: E0123 17:34:20.864526 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:34:20.864865 kubelet[3685]: E0123 17:34:20.864676 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:34:20.864865 kubelet[3685]: E0123 17:34:20.864749 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-55dddbdf7b-97cf9_calico-system(e966f1d6-8ee3-4476-b957-9bae66b7553b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 17:34:20.864865 kubelet[3685]: E0123 17:34:20.864775 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55dddbdf7b-97cf9" podUID="e966f1d6-8ee3-4476-b957-9bae66b7553b" Jan 23 17:34:21.457425 containerd[2084]: time="2026-01-23T17:34:21.457379863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:34:21.733718 containerd[2084]: time="2026-01-23T17:34:21.733293341Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:34:21.736132 containerd[2084]: time="2026-01-23T17:34:21.736089350Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:34:21.736212 containerd[2084]: time="2026-01-23T17:34:21.736179498Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 23 17:34:21.736454 kubelet[3685]: E0123 17:34:21.736396 3685 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:34:21.736500 kubelet[3685]: E0123 17:34:21.736458 3685 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:34:21.736544 kubelet[3685]: E0123 17:34:21.736522 3685 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-85fb74bcbb-xldvx_calico-apiserver(ce786634-1bb7-4148-9461-d169f302e50f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:34:21.736575 kubelet[3685]: E0123 17:34:21.736555 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-xldvx" podUID="ce786634-1bb7-4148-9461-d169f302e50f" Jan 23 17:34:24.459906 kubelet[3685]: E0123 17:34:24.459680 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-d6mpj" podUID="29070044-7a78-4c22-ba4e-b03de4973ab6" Jan 23 17:34:24.459906 kubelet[3685]: E0123 17:34:24.459763 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v8t56" podUID="bea6d6d6-6443-4534-ac1b-26cecad019a7" Jan 23 17:34:25.813883 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 23 17:34:25.814044 kernel: audit: type=1130 audit(1769189665.792:811): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@12-10.200.20.22:22-10.200.16.10:51202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:25.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.22:22-10.200.16.10:51202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:25.793126 systemd[1]: Started sshd@12-10.200.20.22:22-10.200.16.10:51202.service - OpenSSH per-connection server daemon (10.200.16.10:51202). Jan 23 17:34:26.250000 audit[5984]: USER_ACCT pid=5984 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:26.256003 sshd[5984]: Accepted publickey for core from 10.200.16.10 port 51202 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:34:26.267691 sshd-session[5984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:34:26.265000 audit[5984]: CRED_ACQ pid=5984 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:26.282591 kernel: audit: type=1101 audit(1769189666.250:812): pid=5984 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:26.282727 kernel: audit: type=1103 audit(1769189666.265:813): pid=5984 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:26.292630 kernel: audit: type=1006 audit(1769189666.265:814): pid=5984 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 23 17:34:26.265000 audit[5984]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc253c680 a2=3 a3=0 items=0 ppid=1 pid=5984 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:26.310607 kernel: audit: type=1300 audit(1769189666.265:814): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc253c680 a2=3 a3=0 items=0 ppid=1 pid=5984 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:26.265000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:26.318509 kernel: audit: type=1327 audit(1769189666.265:814): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:26.319445 systemd-logind[2051]: New session 16 of user core. Jan 23 17:34:26.322028 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 23 17:34:26.325000 audit[5984]: USER_START pid=5984 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:26.329000 audit[5988]: CRED_ACQ pid=5988 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:26.362235 kernel: audit: type=1105 audit(1769189666.325:815): pid=5984 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:26.362372 kernel: audit: type=1103 audit(1769189666.329:816): pid=5988 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:26.552104 sshd[5988]: Connection closed by 10.200.16.10 port 51202 Jan 23 17:34:26.552740 sshd-session[5984]: pam_unix(sshd:session): session closed for user core Jan 23 17:34:26.554000 audit[5984]: USER_END pid=5984 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:26.558717 systemd[1]: sshd@12-10.200.20.22:22-10.200.16.10:51202.service: Deactivated successfully. Jan 23 17:34:26.561910 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 17:34:26.554000 audit[5984]: CRED_DISP pid=5984 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:26.588564 kernel: audit: type=1106 audit(1769189666.554:817): pid=5984 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:26.588708 kernel: audit: type=1104 audit(1769189666.554:818): pid=5984 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:26.588816 systemd-logind[2051]: Session 16 logged out. Waiting for processes to exit. Jan 23 17:34:26.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.22:22-10.200.16.10:51202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:26.592471 systemd-logind[2051]: Removed session 16. 
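After the initial ErrImagePull, the kubelet does not retry immediately; the later "Error syncing pod" entries switch to ImagePullBackOff, and each failed pull roughly doubles the wait before the next attempt until a ceiling is reached. The sketch below models such a capped, doubling schedule; the 10-second start and 5-minute ceiling are illustrative values, not settings read from this node.

// Minimal sketch of a capped, doubling back-off schedule like the one the
// kubelet applies between image-pull retries (initial delay and ceiling
// are illustrative, not taken from this system's configuration).
package main

import (
	"fmt"
	"time"
)

func backoffSchedule(initial, limit time.Duration, steps int) []time.Duration {
	out := make([]time.Duration, 0, steps)
	d := initial
	for i := 0; i < steps; i++ {
		out = append(out, d)
		d *= 2
		if d > limit {
			d = limit
		}
	}
	return out
}

func main() {
	// Each entry is the wait before the next pull attempt; once the ceiling
	// is reached every further attempt is spaced by that ceiling.
	for i, d := range backoffSchedule(10*time.Second, 5*time.Minute, 8) {
		fmt.Printf("retry %d after %v\n", i+1, d)
	}
}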
Jan 23 17:34:27.457296 kubelet[3685]: E0123 17:34:27.457152 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-844d8cf486-rp7bn" podUID="c0c6a4bb-e851-48c7-afc7-8d5b88a4086b" Jan 23 17:34:31.458139 kubelet[3685]: E0123 17:34:31.457051 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kwnqz" podUID="3673ff07-a128-4686-9fb6-6fd2ab66f4db" Jan 23 17:34:31.458139 kubelet[3685]: E0123 17:34:31.457052 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55dddbdf7b-97cf9" podUID="e966f1d6-8ee3-4476-b957-9bae66b7553b" Jan 23 17:34:31.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.22:22-10.200.16.10:53054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:31.643228 systemd[1]: Started sshd@13-10.200.20.22:22-10.200.16.10:53054.service - OpenSSH per-connection server daemon (10.200.16.10:53054). Jan 23 17:34:31.646399 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 17:34:31.646582 kernel: audit: type=1130 audit(1769189671.642:820): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.22:22-10.200.16.10:53054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:34:32.084911 sshd[6016]: Accepted publickey for core from 10.200.16.10 port 53054 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:34:32.083000 audit[6016]: USER_ACCT pid=6016 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:32.086227 sshd-session[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:34:32.084000 audit[6016]: CRED_ACQ pid=6016 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:32.110592 systemd-logind[2051]: New session 17 of user core. Jan 23 17:34:32.119456 kernel: audit: type=1101 audit(1769189672.083:821): pid=6016 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:32.119574 kernel: audit: type=1103 audit(1769189672.084:822): pid=6016 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:32.129502 kernel: audit: type=1006 audit(1769189672.084:823): pid=6016 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 23 17:34:32.084000 audit[6016]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd3a2dd80 a2=3 a3=0 items=0 ppid=1 pid=6016 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:32.146865 kernel: audit: type=1300 audit(1769189672.084:823): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd3a2dd80 a2=3 a3=0 items=0 ppid=1 pid=6016 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:32.084000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:32.149152 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 23 17:34:32.155425 kernel: audit: type=1327 audit(1769189672.084:823): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:32.156000 audit[6016]: USER_START pid=6016 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:32.176597 kernel: audit: type=1105 audit(1769189672.156:824): pid=6016 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:32.159000 audit[6020]: CRED_ACQ pid=6020 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:32.191355 kernel: audit: type=1103 audit(1769189672.159:825): pid=6020 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:32.375963 sshd[6020]: Connection closed by 10.200.16.10 port 53054 Jan 23 17:34:32.379058 sshd-session[6016]: pam_unix(sshd:session): session closed for user core Jan 23 17:34:32.379000 audit[6016]: USER_END pid=6016 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:32.385822 systemd[1]: sshd@13-10.200.20.22:22-10.200.16.10:53054.service: Deactivated successfully. Jan 23 17:34:32.388753 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 17:34:32.392520 systemd-logind[2051]: Session 17 logged out. Waiting for processes to exit. Jan 23 17:34:32.397438 systemd-logind[2051]: Removed session 17. 
Jan 23 17:34:32.379000 audit[6016]: CRED_DISP pid=6016 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:32.414221 kernel: audit: type=1106 audit(1769189672.379:826): pid=6016 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:32.414365 kernel: audit: type=1104 audit(1769189672.379:827): pid=6016 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:32.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.22:22-10.200.16.10:53054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:34.459507 kubelet[3685]: E0123 17:34:34.459446 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-xldvx" podUID="ce786634-1bb7-4148-9461-d169f302e50f" Jan 23 17:34:37.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.22:22-10.200.16.10:53070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:37.465069 systemd[1]: Started sshd@14-10.200.20.22:22-10.200.16.10:53070.service - OpenSSH per-connection server daemon (10.200.16.10:53070). Jan 23 17:34:37.468467 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 17:34:37.468539 kernel: audit: type=1130 audit(1769189677.463:829): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.22:22-10.200.16.10:53070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:34:37.904000 audit[6032]: USER_ACCT pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:37.907458 sshd[6032]: Accepted publickey for core from 10.200.16.10 port 53070 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:34:37.924115 sshd-session[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:34:37.921000 audit[6032]: CRED_ACQ pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:37.939668 kernel: audit: type=1101 audit(1769189677.904:830): pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:37.939829 kernel: audit: type=1103 audit(1769189677.921:831): pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:37.944637 systemd-logind[2051]: New session 18 of user core. Jan 23 17:34:37.950796 kernel: audit: type=1006 audit(1769189677.921:832): pid=6032 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 23 17:34:37.921000 audit[6032]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff34e8210 a2=3 a3=0 items=0 ppid=1 pid=6032 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:37.969250 kernel: audit: type=1300 audit(1769189677.921:832): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff34e8210 a2=3 a3=0 items=0 ppid=1 pid=6032 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:37.971131 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 23 17:34:37.921000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:37.978031 kernel: audit: type=1327 audit(1769189677.921:832): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:37.975000 audit[6032]: USER_START pid=6032 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:37.998904 kernel: audit: type=1105 audit(1769189677.975:833): pid=6032 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:37.998000 audit[6036]: CRED_ACQ pid=6036 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:38.014678 kernel: audit: type=1103 audit(1769189677.998:834): pid=6036 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:38.226721 sshd[6036]: Connection closed by 10.200.16.10 port 53070 Jan 23 17:34:38.227471 sshd-session[6032]: pam_unix(sshd:session): session closed for user core Jan 23 17:34:38.228000 audit[6032]: USER_END pid=6032 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:38.233678 systemd[1]: sshd@14-10.200.20.22:22-10.200.16.10:53070.service: Deactivated successfully. Jan 23 17:34:38.237835 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 17:34:38.239458 systemd-logind[2051]: Session 18 logged out. Waiting for processes to exit. Jan 23 17:34:38.241737 systemd-logind[2051]: Removed session 18. 
Jan 23 17:34:38.228000 audit[6032]: CRED_DISP pid=6032 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:38.271500 kernel: audit: type=1106 audit(1769189678.228:835): pid=6032 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:38.271658 kernel: audit: type=1104 audit(1769189678.228:836): pid=6032 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:38.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.22:22-10.200.16.10:53070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:38.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.22:22-10.200.16.10:53080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:38.306133 systemd[1]: Started sshd@15-10.200.20.22:22-10.200.16.10:53080.service - OpenSSH per-connection server daemon (10.200.16.10:53080). Jan 23 17:34:38.706000 audit[6048]: USER_ACCT pid=6048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:38.708901 sshd[6048]: Accepted publickey for core from 10.200.16.10 port 53080 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:34:38.707000 audit[6048]: CRED_ACQ pid=6048 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:38.708000 audit[6048]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe5624da0 a2=3 a3=0 items=0 ppid=1 pid=6048 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:38.708000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:38.710978 sshd-session[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:34:38.717700 systemd-logind[2051]: New session 19 of user core. Jan 23 17:34:38.722108 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 23 17:34:38.723000 audit[6048]: USER_START pid=6048 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:38.725000 audit[6052]: CRED_ACQ pid=6052 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:39.089924 sshd[6052]: Connection closed by 10.200.16.10 port 53080 Jan 23 17:34:39.090988 sshd-session[6048]: pam_unix(sshd:session): session closed for user core Jan 23 17:34:39.090000 audit[6048]: USER_END pid=6048 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:39.091000 audit[6048]: CRED_DISP pid=6048 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:39.095829 systemd-logind[2051]: Session 19 logged out. Waiting for processes to exit. Jan 23 17:34:39.096440 systemd[1]: sshd@15-10.200.20.22:22-10.200.16.10:53080.service: Deactivated successfully. Jan 23 17:34:39.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.22:22-10.200.16.10:53080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:39.098506 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 17:34:39.100346 systemd-logind[2051]: Removed session 19. Jan 23 17:34:39.186421 systemd[1]: Started sshd@16-10.200.20.22:22-10.200.16.10:53092.service - OpenSSH per-connection server daemon (10.200.16.10:53092). Jan 23 17:34:39.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.22:22-10.200.16.10:53092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:34:39.457778 kubelet[3685]: E0123 17:34:39.457702 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-d6mpj" podUID="29070044-7a78-4c22-ba4e-b03de4973ab6" Jan 23 17:34:39.458234 kubelet[3685]: E0123 17:34:39.458183 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v8t56" podUID="bea6d6d6-6443-4534-ac1b-26cecad019a7" Jan 23 17:34:39.608000 audit[6062]: USER_ACCT pid=6062 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:39.610624 sshd[6062]: Accepted publickey for core from 10.200.16.10 port 53092 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:34:39.609000 audit[6062]: CRED_ACQ pid=6062 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:39.610000 audit[6062]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffca760650 a2=3 a3=0 items=0 ppid=1 pid=6062 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:39.610000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:39.612519 sshd-session[6062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:34:39.617079 systemd-logind[2051]: New session 20 of user core. Jan 23 17:34:39.625190 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 23 17:34:39.628000 audit[6062]: USER_START pid=6062 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:39.629000 audit[6066]: CRED_ACQ pid=6066 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:40.389000 audit[6081]: NETFILTER_CFG table=filter:135 family=2 entries=26 op=nft_register_rule pid=6081 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:34:40.389000 audit[6081]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffeaca7370 a2=0 a3=1 items=0 ppid=3791 pid=6081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:40.389000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:34:40.393000 audit[6081]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=6081 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:34:40.393000 audit[6081]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffeaca7370 a2=0 a3=1 items=0 ppid=3791 pid=6081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:40.393000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:34:40.476319 sshd[6066]: Connection closed by 10.200.16.10 port 53092 Jan 23 17:34:40.477065 sshd-session[6062]: pam_unix(sshd:session): session closed for user core Jan 23 17:34:40.479000 audit[6062]: USER_END pid=6062 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:40.479000 audit[6062]: CRED_DISP pid=6062 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:40.484245 systemd[1]: sshd@16-10.200.20.22:22-10.200.16.10:53092.service: Deactivated successfully. Jan 23 17:34:40.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.22:22-10.200.16.10:53092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:40.487584 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 17:34:40.488686 systemd-logind[2051]: Session 20 logged out. Waiting for processes to exit. Jan 23 17:34:40.492046 systemd-logind[2051]: Removed session 20. 
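The PROCTITLE fields in the audit records are hex-encoded command lines with NUL-separated arguments; the value attached to the iptables-restore events above decodes to "iptables-restore -w 5 --noflush --counters", and the recurring sshd value decodes to "sshd-session: core [priv]". A small Go sketch of the decoding:

// Minimal sketch: decode an audit PROCTITLE field (hex-encoded argv with
// NUL separators) into a readable command line. The sample value is copied
// from the iptables-restore audit records above.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(hexStr string) (string, error) {
	raw, err := hex.DecodeString(hexStr)
	if err != nil {
		return "", err
	}
	// argv elements are separated by NUL bytes in the audit record.
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	cmd, err := decodeProctitle("69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273")
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // iptables-restore -w 5 --noflush --counters
}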
Jan 23 17:34:40.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.22:22-10.200.16.10:51700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:40.570103 systemd[1]: Started sshd@17-10.200.20.22:22-10.200.16.10:51700.service - OpenSSH per-connection server daemon (10.200.16.10:51700). Jan 23 17:34:40.996000 audit[6088]: USER_ACCT pid=6088 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:40.997719 sshd[6088]: Accepted publickey for core from 10.200.16.10 port 51700 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:34:40.997000 audit[6088]: CRED_ACQ pid=6088 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:40.997000 audit[6088]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff0c87d50 a2=3 a3=0 items=0 ppid=1 pid=6088 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:40.997000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:40.999341 sshd-session[6088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:34:41.003582 systemd-logind[2051]: New session 21 of user core. Jan 23 17:34:41.009051 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 17:34:41.011000 audit[6088]: USER_START pid=6088 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:41.012000 audit[6092]: CRED_ACQ pid=6092 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:41.369568 sshd[6092]: Connection closed by 10.200.16.10 port 51700 Jan 23 17:34:41.371005 sshd-session[6088]: pam_unix(sshd:session): session closed for user core Jan 23 17:34:41.372000 audit[6088]: USER_END pid=6088 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:41.372000 audit[6088]: CRED_DISP pid=6088 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:41.375667 systemd[1]: sshd@17-10.200.20.22:22-10.200.16.10:51700.service: Deactivated successfully. 
Jan 23 17:34:41.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.22:22-10.200.16.10:51700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:41.379951 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 17:34:41.382819 systemd-logind[2051]: Session 21 logged out. Waiting for processes to exit. Jan 23 17:34:41.384599 systemd-logind[2051]: Removed session 21. Jan 23 17:34:41.413000 audit[6101]: NETFILTER_CFG table=filter:137 family=2 entries=38 op=nft_register_rule pid=6101 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:34:41.413000 audit[6101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd7249cf0 a2=0 a3=1 items=0 ppid=3791 pid=6101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:41.413000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:34:41.416000 audit[6101]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=6101 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:34:41.416000 audit[6101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd7249cf0 a2=0 a3=1 items=0 ppid=3791 pid=6101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:41.416000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:34:41.459410 kubelet[3685]: E0123 17:34:41.459361 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-844d8cf486-rp7bn" podUID="c0c6a4bb-e851-48c7-afc7-8d5b88a4086b" Jan 23 17:34:41.468283 systemd[1]: Started sshd@18-10.200.20.22:22-10.200.16.10:51704.service - OpenSSH per-connection server daemon (10.200.16.10:51704). Jan 23 17:34:41.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.22:22-10.200.16.10:51704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:34:41.905000 audit[6103]: USER_ACCT pid=6103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:41.906327 sshd[6103]: Accepted publickey for core from 10.200.16.10 port 51704 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:34:41.907000 audit[6103]: CRED_ACQ pid=6103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:41.908000 audit[6103]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff0deeb50 a2=3 a3=0 items=0 ppid=1 pid=6103 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:41.908000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:41.909618 sshd-session[6103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:34:41.916781 systemd-logind[2051]: New session 22 of user core. Jan 23 17:34:41.923320 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 17:34:41.928000 audit[6103]: USER_START pid=6103 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:41.930000 audit[6107]: CRED_ACQ pid=6107 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:42.212605 sshd[6107]: Connection closed by 10.200.16.10 port 51704 Jan 23 17:34:42.214111 sshd-session[6103]: pam_unix(sshd:session): session closed for user core Jan 23 17:34:42.214000 audit[6103]: USER_END pid=6103 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:42.214000 audit[6103]: CRED_DISP pid=6103 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:42.217947 systemd[1]: sshd@18-10.200.20.22:22-10.200.16.10:51704.service: Deactivated successfully. Jan 23 17:34:42.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.22:22-10.200.16.10:51704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:42.220699 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 17:34:42.222394 systemd-logind[2051]: Session 22 logged out. Waiting for processes to exit. 
Jan 23 17:34:42.224101 systemd-logind[2051]: Removed session 22. Jan 23 17:34:42.458922 kubelet[3685]: E0123 17:34:42.458385 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kwnqz" podUID="3673ff07-a128-4686-9fb6-6fd2ab66f4db" Jan 23 17:34:46.372000 audit[6145]: NETFILTER_CFG table=filter:139 family=2 entries=26 op=nft_register_rule pid=6145 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:34:46.376865 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 23 17:34:46.376964 kernel: audit: type=1325 audit(1769189686.372:878): table=filter:139 family=2 entries=26 op=nft_register_rule pid=6145 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:34:46.405260 kernel: audit: type=1300 audit(1769189686.372:878): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcc0cc510 a2=0 a3=1 items=0 ppid=3791 pid=6145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:46.372000 audit[6145]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcc0cc510 a2=0 a3=1 items=0 ppid=3791 pid=6145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:46.372000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:34:46.417593 kernel: audit: type=1327 audit(1769189686.372:878): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:34:46.407000 audit[6145]: NETFILTER_CFG table=nat:140 family=2 entries=104 op=nft_register_chain pid=6145 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:34:46.407000 audit[6145]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffcc0cc510 a2=0 a3=1 items=0 ppid=3791 pid=6145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:46.427961 kernel: audit: type=1325 audit(1769189686.407:879): table=nat:140 family=2 entries=104 op=nft_register_chain pid=6145 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 23 17:34:46.407000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:34:46.458141 kernel: audit: type=1300 audit(1769189686.407:879): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffcc0cc510 a2=0 a3=1 items=0 ppid=3791 pid=6145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:46.458237 kernel: audit: type=1327 audit(1769189686.407:879): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 23 17:34:46.460121 
kubelet[3685]: E0123 17:34:46.460068 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-xldvx" podUID="ce786634-1bb7-4148-9461-d169f302e50f" Jan 23 17:34:46.460121 kubelet[3685]: E0123 17:34:46.460068 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55dddbdf7b-97cf9" podUID="e966f1d6-8ee3-4476-b957-9bae66b7553b" Jan 23 17:34:47.314304 systemd[1]: Started sshd@19-10.200.20.22:22-10.200.16.10:51708.service - OpenSSH per-connection server daemon (10.200.16.10:51708). Jan 23 17:34:47.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.22:22-10.200.16.10:51708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:47.329921 kernel: audit: type=1130 audit(1769189687.313:880): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.22:22-10.200.16.10:51708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:34:47.747000 audit[6147]: USER_ACCT pid=6147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:47.748355 sshd[6147]: Accepted publickey for core from 10.200.16.10 port 51708 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:34:47.764946 kernel: audit: type=1101 audit(1769189687.747:881): pid=6147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:47.764000 audit[6147]: CRED_ACQ pid=6147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:47.766180 sshd-session[6147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:34:47.788868 kernel: audit: type=1103 audit(1769189687.764:882): pid=6147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:47.788971 kernel: audit: type=1006 audit(1769189687.764:883): pid=6147 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 23 17:34:47.764000 audit[6147]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdf4b1270 a2=3 a3=0 items=0 ppid=1 pid=6147 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:47.764000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:47.798219 systemd-logind[2051]: New session 23 of user core. Jan 23 17:34:47.805040 systemd[1]: Started session-23.scope - Session 23 of User core. 
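In the kauditd lines above, the value inside audit(...) is <epoch seconds>.<milliseconds>:<record serial>. A quick cross-check of that timestamp against the journal's wall-clock prefix, using 1769189687 from the type=1101 record above and assuming the journal is rendered in UTC:

from datetime import datetime, timezone

# audit(1769189687.747:881): 1769189687 is seconds since the epoch, .747 the milliseconds.
print(datetime.fromtimestamp(1769189687, tz=timezone.utc).isoformat())
# 2026-01-23T17:34:47+00:00, i.e. the "Jan 23 17:34:47.747" prefix on the same record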
Jan 23 17:34:47.808000 audit[6147]: USER_START pid=6147 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:47.810000 audit[6151]: CRED_ACQ pid=6151 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:48.060234 sshd[6151]: Connection closed by 10.200.16.10 port 51708 Jan 23 17:34:48.060778 sshd-session[6147]: pam_unix(sshd:session): session closed for user core Jan 23 17:34:48.062000 audit[6147]: USER_END pid=6147 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:48.062000 audit[6147]: CRED_DISP pid=6147 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:48.066656 systemd[1]: sshd@19-10.200.20.22:22-10.200.16.10:51708.service: Deactivated successfully. Jan 23 17:34:48.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.22:22-10.200.16.10:51708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:48.070738 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 17:34:48.073554 systemd-logind[2051]: Session 23 logged out. Waiting for processes to exit. Jan 23 17:34:48.074795 systemd-logind[2051]: Removed session 23. 
Jan 23 17:34:51.457464 kubelet[3685]: E0123 17:34:51.457366 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v8t56" podUID="bea6d6d6-6443-4534-ac1b-26cecad019a7" Jan 23 17:34:51.458096 kubelet[3685]: E0123 17:34:51.457985 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-d6mpj" podUID="29070044-7a78-4c22-ba4e-b03de4973ab6" Jan 23 17:34:52.458003 kubelet[3685]: E0123 17:34:52.457887 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-844d8cf486-rp7bn" podUID="c0c6a4bb-e851-48c7-afc7-8d5b88a4086b" Jan 23 17:34:53.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.22:22-10.200.16.10:46292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:53.149297 systemd[1]: Started sshd@20-10.200.20.22:22-10.200.16.10:46292.service - OpenSSH per-connection server daemon (10.200.16.10:46292). Jan 23 17:34:53.152547 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 23 17:34:53.152685 kernel: audit: type=1130 audit(1769189693.148:889): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.22:22-10.200.16.10:46292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:34:53.588000 audit[6164]: USER_ACCT pid=6164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:53.589990 sshd[6164]: Accepted publickey for core from 10.200.16.10 port 46292 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:34:53.605438 sshd-session[6164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:34:53.603000 audit[6164]: CRED_ACQ pid=6164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:53.619969 kernel: audit: type=1101 audit(1769189693.588:890): pid=6164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:53.620082 kernel: audit: type=1103 audit(1769189693.603:891): pid=6164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:53.629490 kernel: audit: type=1006 audit(1769189693.603:892): pid=6164 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 23 17:34:53.603000 audit[6164]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff755b060 a2=3 a3=0 items=0 ppid=1 pid=6164 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:53.645817 kernel: audit: type=1300 audit(1769189693.603:892): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff755b060 a2=3 a3=0 items=0 ppid=1 pid=6164 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:53.603000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:53.653563 kernel: audit: type=1327 audit(1769189693.603:892): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:53.656186 systemd-logind[2051]: New session 24 of user core. Jan 23 17:34:53.661045 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 23 17:34:53.663000 audit[6164]: USER_START pid=6164 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:53.682000 audit[6168]: CRED_ACQ pid=6168 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:53.685970 kernel: audit: type=1105 audit(1769189693.663:893): pid=6164 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:53.702874 kernel: audit: type=1103 audit(1769189693.682:894): pid=6168 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:53.892597 sshd[6168]: Connection closed by 10.200.16.10 port 46292 Jan 23 17:34:53.892509 sshd-session[6164]: pam_unix(sshd:session): session closed for user core Jan 23 17:34:53.894000 audit[6164]: USER_END pid=6164 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:53.898917 systemd[1]: sshd@20-10.200.20.22:22-10.200.16.10:46292.service: Deactivated successfully. Jan 23 17:34:53.903587 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 17:34:53.895000 audit[6164]: CRED_DISP pid=6164 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:53.917888 kernel: audit: type=1106 audit(1769189693.894:895): pid=6164 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:53.917695 systemd-logind[2051]: Session 24 logged out. Waiting for processes to exit. Jan 23 17:34:53.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.22:22-10.200.16.10:46292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:53.933870 kernel: audit: type=1104 audit(1769189693.895:896): pid=6164 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:53.934065 systemd-logind[2051]: Removed session 24. 
Jan 23 17:34:56.457466 kubelet[3685]: E0123 17:34:56.457255 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kwnqz" podUID="3673ff07-a128-4686-9fb6-6fd2ab66f4db" Jan 23 17:34:58.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.22:22-10.200.16.10:46296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:58.986122 systemd[1]: Started sshd@21-10.200.20.22:22-10.200.16.10:46296.service - OpenSSH per-connection server daemon (10.200.16.10:46296). Jan 23 17:34:58.989870 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 17:34:58.989957 kernel: audit: type=1130 audit(1769189698.985:898): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.22:22-10.200.16.10:46296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:34:59.424979 sshd[6180]: Accepted publickey for core from 10.200.16.10 port 46296 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:34:59.423000 audit[6180]: USER_ACCT pid=6180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:59.442914 sshd-session[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:34:59.441000 audit[6180]: CRED_ACQ pid=6180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:59.454512 systemd-logind[2051]: New session 25 of user core. 
Jan 23 17:34:59.462322 kernel: audit: type=1101 audit(1769189699.423:899): pid=6180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:59.462434 kernel: audit: type=1103 audit(1769189699.441:900): pid=6180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:59.466883 kubelet[3685]: E0123 17:34:59.466024 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-xldvx" podUID="ce786634-1bb7-4148-9461-d169f302e50f" Jan 23 17:34:59.472162 kernel: audit: type=1006 audit(1769189699.441:901): pid=6180 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 23 17:34:59.441000 audit[6180]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffce9b9130 a2=3 a3=0 items=0 ppid=1 pid=6180 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:59.489901 kernel: audit: type=1300 audit(1769189699.441:901): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffce9b9130 a2=3 a3=0 items=0 ppid=1 pid=6180 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:34:59.441000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:59.492133 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 23 17:34:59.497644 kernel: audit: type=1327 audit(1769189699.441:901): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:34:59.498000 audit[6180]: USER_START pid=6180 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:59.520000 audit[6184]: CRED_ACQ pid=6184 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:59.534839 kernel: audit: type=1105 audit(1769189699.498:902): pid=6180 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:59.534981 kernel: audit: type=1103 audit(1769189699.520:903): pid=6184 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:59.733891 sshd[6184]: Connection closed by 10.200.16.10 port 46296 Jan 23 17:34:59.735082 sshd-session[6180]: pam_unix(sshd:session): session closed for user core Jan 23 17:34:59.735000 audit[6180]: USER_END pid=6180 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:59.744830 systemd[1]: sshd@21-10.200.20.22:22-10.200.16.10:46296.service: Deactivated successfully. Jan 23 17:34:59.747815 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 17:34:59.749575 systemd-logind[2051]: Session 25 logged out. Waiting for processes to exit. Jan 23 17:34:59.751253 systemd-logind[2051]: Removed session 25. 
Jan 23 17:34:59.735000 audit[6180]: CRED_DISP pid=6180 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:59.769265 kernel: audit: type=1106 audit(1769189699.735:904): pid=6180 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:59.769394 kernel: audit: type=1104 audit(1769189699.735:905): pid=6180 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:34:59.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.22:22-10.200.16.10:46296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:01.456975 kubelet[3685]: E0123 17:35:01.456811 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-55dddbdf7b-97cf9" podUID="e966f1d6-8ee3-4476-b957-9bae66b7553b" Jan 23 17:35:03.456863 kubelet[3685]: E0123 17:35:03.456748 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-d6mpj" podUID="29070044-7a78-4c22-ba4e-b03de4973ab6" Jan 23 17:35:03.457519 kubelet[3685]: E0123 17:35:03.457202 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v8t56" podUID="bea6d6d6-6443-4534-ac1b-26cecad019a7" Jan 23 17:35:04.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@22-10.200.20.22:22-10.200.16.10:43374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:04.828152 systemd[1]: Started sshd@22-10.200.20.22:22-10.200.16.10:43374.service - OpenSSH per-connection server daemon (10.200.16.10:43374). Jan 23 17:35:04.832353 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 17:35:04.832482 kernel: audit: type=1130 audit(1769189704.827:907): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.22:22-10.200.16.10:43374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:05.267000 audit[6198]: USER_ACCT pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:05.271022 sshd[6198]: Accepted publickey for core from 10.200.16.10 port 43374 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:35:05.284365 sshd-session[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:35:05.282000 audit[6198]: CRED_ACQ pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:05.301430 kernel: audit: type=1101 audit(1769189705.267:908): pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:05.301626 kernel: audit: type=1103 audit(1769189705.282:909): pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:05.307447 systemd-logind[2051]: New session 26 of user core. Jan 23 17:35:05.311290 kernel: audit: type=1006 audit(1769189705.282:910): pid=6198 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 23 17:35:05.282000 audit[6198]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffeb0eac0 a2=3 a3=0 items=0 ppid=1 pid=6198 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:05.328657 kernel: audit: type=1300 audit(1769189705.282:910): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffeb0eac0 a2=3 a3=0 items=0 ppid=1 pid=6198 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:05.282000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:35:05.332137 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 23 17:35:05.336546 kernel: audit: type=1327 audit(1769189705.282:910): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:35:05.337000 audit[6198]: USER_START pid=6198 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:05.359000 audit[6202]: CRED_ACQ pid=6202 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:05.375910 kernel: audit: type=1105 audit(1769189705.337:911): pid=6198 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:05.376053 kernel: audit: type=1103 audit(1769189705.359:912): pid=6202 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:05.460211 kubelet[3685]: E0123 17:35:05.460153 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-844d8cf486-rp7bn" podUID="c0c6a4bb-e851-48c7-afc7-8d5b88a4086b" Jan 23 17:35:05.600112 sshd[6202]: Connection closed by 10.200.16.10 port 43374 Jan 23 17:35:05.603186 sshd-session[6198]: pam_unix(sshd:session): session closed for user core Jan 23 17:35:05.604000 audit[6198]: USER_END pid=6198 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:05.610227 systemd[1]: sshd@22-10.200.20.22:22-10.200.16.10:43374.service: Deactivated successfully. Jan 23 17:35:05.615032 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 17:35:05.623490 systemd-logind[2051]: Session 26 logged out. Waiting for processes to exit. Jan 23 17:35:05.625793 systemd-logind[2051]: Removed session 26. 
Jan 23 17:35:05.605000 audit[6198]: CRED_DISP pid=6198 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:05.642123 kernel: audit: type=1106 audit(1769189705.604:913): pid=6198 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:05.642293 kernel: audit: type=1104 audit(1769189705.605:914): pid=6198 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:05.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.22:22-10.200.16.10:43374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:09.456402 kubelet[3685]: E0123 17:35:09.456353 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-kwnqz" podUID="3673ff07-a128-4686-9fb6-6fd2ab66f4db" Jan 23 17:35:10.692000 systemd[1]: Started sshd@23-10.200.20.22:22-10.200.16.10:49848.service - OpenSSH per-connection server daemon (10.200.16.10:49848). Jan 23 17:35:10.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.22:22-10.200.16.10:49848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 23 17:35:10.695686 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 23 17:35:10.695824 kernel: audit: type=1130 audit(1769189710.690:916): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.22:22-10.200.16.10:49848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:35:11.150000 audit[6217]: USER_ACCT pid=6217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:11.169297 sshd[6217]: Accepted publickey for core from 10.200.16.10 port 49848 ssh2: RSA SHA256:sWEJExtxDe4gs9/o2tjOo6Ll3IuN1XzCX6NXuvrOaPA Jan 23 17:35:11.170925 sshd-session[6217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:35:11.168000 audit[6217]: CRED_ACQ pid=6217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:11.185617 kernel: audit: type=1101 audit(1769189711.150:917): pid=6217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:11.185757 kernel: audit: type=1103 audit(1769189711.168:918): pid=6217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:11.194883 kernel: audit: type=1006 audit(1769189711.168:919): pid=6217 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 23 17:35:11.168000 audit[6217]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff0fa28a0 a2=3 a3=0 items=0 ppid=1 pid=6217 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:11.212285 kernel: audit: type=1300 audit(1769189711.168:919): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff0fa28a0 a2=3 a3=0 items=0 ppid=1 pid=6217 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 23 17:35:11.168000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:35:11.219446 kernel: audit: type=1327 audit(1769189711.168:919): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 23 17:35:11.223684 systemd-logind[2051]: New session 27 of user core. Jan 23 17:35:11.229196 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 23 17:35:11.256000 audit[6217]: USER_START pid=6217 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:11.277000 audit[6221]: CRED_ACQ pid=6221 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:11.292065 kernel: audit: type=1105 audit(1769189711.256:920): pid=6217 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:11.292213 kernel: audit: type=1103 audit(1769189711.277:921): pid=6221 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:11.489626 sshd[6221]: Connection closed by 10.200.16.10 port 49848 Jan 23 17:35:11.490181 sshd-session[6217]: pam_unix(sshd:session): session closed for user core Jan 23 17:35:11.490000 audit[6217]: USER_END pid=6217 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:11.512834 systemd[1]: sshd@23-10.200.20.22:22-10.200.16.10:49848.service: Deactivated successfully. Jan 23 17:35:11.515654 systemd[1]: session-27.scope: Deactivated successfully. Jan 23 17:35:11.518596 systemd-logind[2051]: Session 27 logged out. Waiting for processes to exit. Jan 23 17:35:11.490000 audit[6217]: CRED_DISP pid=6217 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:11.535015 kernel: audit: type=1106 audit(1769189711.490:922): pid=6217 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:11.535149 kernel: audit: type=1104 audit(1769189711.490:923): pid=6217 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 23 17:35:11.538171 systemd-logind[2051]: Removed session 27. Jan 23 17:35:11.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.22:22-10.200.16.10:49848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 23 17:35:13.459265 kubelet[3685]: E0123 17:35:13.458372 3685 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-85fb74bcbb-xldvx" podUID="ce786634-1bb7-4148-9461-d169f302e50f"
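The repeated ImagePullBackOff records all trace back to tags such as ghcr.io/flatcar/calico/apiserver:v3.30.4 failing to resolve. A rough way to confirm that the tag is genuinely absent from the registry, sketched with Python's standard library and assuming ghcr.io's usual Docker Registry v2 anonymous-token flow for public repositories (containerd's own resolver is what actually produced the NotFound errors above, so this is only an out-of-band check):

import json
import urllib.error
import urllib.request

repo, tag = "flatcar/calico/apiserver", "v3.30.4"

# Obtain an anonymous pull token for the repository.
token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
with urllib.request.urlopen(token_url) as resp:
    token = json.load(resp)["token"]

# Ask for the manifest; an HTTP 404 here corresponds to the
# "failed to resolve image ... not found" error in the kubelet records.
req = urllib.request.Request(
    f"https://ghcr.io/v2/{repo}/manifests/{tag}",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": ", ".join([
            "application/vnd.oci.image.index.v1+json",
            "application/vnd.oci.image.manifest.v1+json",
            "application/vnd.docker.distribution.manifest.list.v2+json",
            "application/vnd.docker.distribution.manifest.v2+json",
        ]),
    },
)
try:
    with urllib.request.urlopen(req) as resp:
        print("manifest found:", resp.headers.get("Docker-Content-Digest"))
except urllib.error.HTTPError as err:
    print("registry returned", err.code)  # 404 -> the tag does not exist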