Jan 28 00:22:32.179161 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Jan 28 00:22:32.179178 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Jan 27 22:28:28 -00 2026 Jan 28 00:22:32.179184 kernel: KASLR enabled Jan 28 00:22:32.179189 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 28 00:22:32.179193 kernel: printk: legacy bootconsole [pl11] enabled Jan 28 00:22:32.179197 kernel: efi: EFI v2.7 by EDK II Jan 28 00:22:32.179203 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db83598 Jan 28 00:22:32.179207 kernel: random: crng init done Jan 28 00:22:32.179211 kernel: secureboot: Secure boot disabled Jan 28 00:22:32.179215 kernel: ACPI: Early table checksum verification disabled Jan 28 00:22:32.179219 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL) Jan 28 00:22:32.179223 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:22:32.179227 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:22:32.179232 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 28 00:22:32.179238 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:22:32.179242 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:22:32.179247 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:22:32.179252 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:22:32.179257 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:22:32.179261 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:22:32.179265 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 28 00:22:32.179270 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 00:22:32.179274 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 28 00:22:32.179278 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 28 00:22:32.179283 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 28 00:22:32.179287 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Jan 28 00:22:32.179292 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Jan 28 00:22:32.179297 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 28 00:22:32.179301 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 28 00:22:32.179306 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 28 00:22:32.179310 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 28 00:22:32.179315 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 28 00:22:32.179319 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 28 00:22:32.179323 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 28 00:22:32.179328 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 28 00:22:32.179332 kernel: ACPI: SRAT: Node 0 PXM 
0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 28 00:22:32.179337 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Jan 28 00:22:32.179341 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff] Jan 28 00:22:32.179346 kernel: Zone ranges: Jan 28 00:22:32.179351 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 28 00:22:32.179357 kernel: DMA32 empty Jan 28 00:22:32.179362 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 28 00:22:32.179367 kernel: Device empty Jan 28 00:22:32.179372 kernel: Movable zone start for each node Jan 28 00:22:32.179377 kernel: Early memory node ranges Jan 28 00:22:32.179381 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 28 00:22:32.179386 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff] Jan 28 00:22:32.179391 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff] Jan 28 00:22:32.179395 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff] Jan 28 00:22:32.179400 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff] Jan 28 00:22:32.179404 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff] Jan 28 00:22:32.179409 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 28 00:22:32.179415 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 28 00:22:32.179419 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 28 00:22:32.179424 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1 Jan 28 00:22:32.179429 kernel: psci: probing for conduit method from ACPI. Jan 28 00:22:32.179433 kernel: psci: PSCIv1.3 detected in firmware. Jan 28 00:22:32.179438 kernel: psci: Using standard PSCI v0.2 function IDs Jan 28 00:22:32.179443 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jan 28 00:22:32.179447 kernel: psci: SMC Calling Convention v1.4 Jan 28 00:22:32.179452 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 28 00:22:32.179457 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 28 00:22:32.179461 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 28 00:22:32.179466 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 28 00:22:32.179472 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 28 00:22:32.179476 kernel: Detected PIPT I-cache on CPU0 Jan 28 00:22:32.179481 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Jan 28 00:22:32.179486 kernel: CPU features: detected: GIC system register CPU interface Jan 28 00:22:32.179490 kernel: CPU features: detected: Spectre-v4 Jan 28 00:22:32.179495 kernel: CPU features: detected: Spectre-BHB Jan 28 00:22:32.179500 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 28 00:22:32.179504 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 28 00:22:32.179509 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Jan 28 00:22:32.179514 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 28 00:22:32.179519 kernel: alternatives: applying boot alternatives Jan 28 00:22:32.179525 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=ffbbb1f2dd4f19dd875b6fa16303680c6bcd968d1e90ec98053307c162b9a8d1 Jan 28 00:22:32.179530 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 28 00:22:32.179535 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 28 00:22:32.179539 kernel: Fallback order for Node 0: 0 Jan 28 00:22:32.179544 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Jan 28 00:22:32.179549 kernel: Policy zone: Normal Jan 28 00:22:32.179553 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 28 00:22:32.179558 kernel: software IO TLB: area num 2. Jan 28 00:22:32.179563 kernel: software IO TLB: mapped [mem 0x0000000037370000-0x000000003b370000] (64MB) Jan 28 00:22:32.179567 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 28 00:22:32.179573 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 28 00:22:32.179578 kernel: rcu: RCU event tracing is enabled. Jan 28 00:22:32.179583 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 28 00:22:32.179587 kernel: Trampoline variant of Tasks RCU enabled. Jan 28 00:22:32.179592 kernel: Tracing variant of Tasks RCU enabled. Jan 28 00:22:32.179597 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 28 00:22:32.179602 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 28 00:22:32.179606 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 28 00:22:32.179611 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 28 00:22:32.179616 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 28 00:22:32.179620 kernel: GICv3: 960 SPIs implemented Jan 28 00:22:32.179626 kernel: GICv3: 0 Extended SPIs implemented Jan 28 00:22:32.179630 kernel: Root IRQ handler: gic_handle_irq Jan 28 00:22:32.179635 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 28 00:22:32.179640 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Jan 28 00:22:32.179644 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 28 00:22:32.179649 kernel: ITS: No ITS available, not enabling LPIs Jan 28 00:22:32.179654 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 28 00:22:32.179659 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Jan 28 00:22:32.179663 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 28 00:22:32.179668 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Jan 28 00:22:32.179673 kernel: Console: colour dummy device 80x25 Jan 28 00:22:32.179679 kernel: printk: legacy console [tty1] enabled Jan 28 00:22:32.179684 kernel: ACPI: Core revision 20240827 Jan 28 00:22:32.179689 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Jan 28 00:22:32.179694 kernel: pid_max: default: 32768 minimum: 301 Jan 28 00:22:32.179699 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 28 00:22:32.179704 kernel: landlock: Up and running. Jan 28 00:22:32.179709 kernel: SELinux: Initializing. Jan 28 00:22:32.179715 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 28 00:22:32.179720 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 28 00:22:32.179725 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1 Jan 28 00:22:32.179730 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0 Jan 28 00:22:32.179738 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 28 00:22:32.179744 kernel: rcu: Hierarchical SRCU implementation. Jan 28 00:22:32.179749 kernel: rcu: Max phase no-delay instances is 400. Jan 28 00:22:32.179754 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 28 00:22:32.179759 kernel: Remapping and enabling EFI services. Jan 28 00:22:32.179765 kernel: smp: Bringing up secondary CPUs ... Jan 28 00:22:32.179770 kernel: Detected PIPT I-cache on CPU1 Jan 28 00:22:32.179775 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 28 00:22:32.179780 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Jan 28 00:22:32.179786 kernel: smp: Brought up 1 node, 2 CPUs Jan 28 00:22:32.179792 kernel: SMP: Total of 2 processors activated. 
Jan 28 00:22:32.179797 kernel: CPU: All CPU(s) started at EL1 Jan 28 00:22:32.179802 kernel: CPU features: detected: 32-bit EL0 Support Jan 28 00:22:32.179807 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 28 00:22:32.179813 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 28 00:22:32.179829 kernel: CPU features: detected: Common not Private translations Jan 28 00:22:32.179836 kernel: CPU features: detected: CRC32 instructions Jan 28 00:22:32.179841 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Jan 28 00:22:32.179846 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 28 00:22:32.179851 kernel: CPU features: detected: LSE atomic instructions Jan 28 00:22:32.179856 kernel: CPU features: detected: Privileged Access Never Jan 28 00:22:32.179861 kernel: CPU features: detected: Speculation barrier (SB) Jan 28 00:22:32.179867 kernel: CPU features: detected: TLB range maintenance instructions Jan 28 00:22:32.179873 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 28 00:22:32.179878 kernel: CPU features: detected: Scalable Vector Extension Jan 28 00:22:32.179883 kernel: alternatives: applying system-wide alternatives Jan 28 00:22:32.179888 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jan 28 00:22:32.179893 kernel: SVE: maximum available vector length 16 bytes per vector Jan 28 00:22:32.179899 kernel: SVE: default vector length 16 bytes per vector Jan 28 00:22:32.179904 kernel: Memory: 3979900K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 12480K init, 1038K bss, 193072K reserved, 16384K cma-reserved) Jan 28 00:22:32.179910 kernel: devtmpfs: initialized Jan 28 00:22:32.179915 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 28 00:22:32.179921 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 28 00:22:32.179926 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 28 00:22:32.179931 kernel: 0 pages in range for non-PLT usage Jan 28 00:22:32.179936 kernel: 515168 pages in range for PLT usage Jan 28 00:22:32.179941 kernel: pinctrl core: initialized pinctrl subsystem Jan 28 00:22:32.179947 kernel: SMBIOS 3.1.0 present. Jan 28 00:22:32.179952 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Jan 28 00:22:32.179958 kernel: DMI: Memory slots populated: 2/2 Jan 28 00:22:32.179963 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 28 00:22:32.179968 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 28 00:22:32.179973 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 28 00:22:32.179978 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 28 00:22:32.179984 kernel: audit: initializing netlink subsys (disabled) Jan 28 00:22:32.179990 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Jan 28 00:22:32.179995 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 28 00:22:32.180000 kernel: cpuidle: using governor menu Jan 28 00:22:32.180005 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 28 00:22:32.180010 kernel: ASID allocator initialised with 32768 entries Jan 28 00:22:32.180015 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 28 00:22:32.180020 kernel: Serial: AMBA PL011 UART driver Jan 28 00:22:32.180026 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 28 00:22:32.180031 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 28 00:22:32.180036 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 28 00:22:32.180041 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 28 00:22:32.180047 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 28 00:22:32.180052 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 28 00:22:32.180057 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 28 00:22:32.180063 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 28 00:22:32.180068 kernel: ACPI: Added _OSI(Module Device) Jan 28 00:22:32.180073 kernel: ACPI: Added _OSI(Processor Device) Jan 28 00:22:32.180078 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 28 00:22:32.180083 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 28 00:22:32.180088 kernel: ACPI: Interpreter enabled Jan 28 00:22:32.180094 kernel: ACPI: Using GIC for interrupt routing Jan 28 00:22:32.180100 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 28 00:22:32.180105 kernel: printk: legacy console [ttyAMA0] enabled Jan 28 00:22:32.180110 kernel: printk: legacy bootconsole [pl11] disabled Jan 28 00:22:32.180115 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 28 00:22:32.180120 kernel: ACPI: CPU0 has been hot-added Jan 28 00:22:32.180125 kernel: ACPI: CPU1 has been hot-added Jan 28 00:22:32.180130 kernel: iommu: Default domain type: Translated Jan 28 00:22:32.180136 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 28 00:22:32.180142 kernel: efivars: Registered efivars operations Jan 28 00:22:32.180147 kernel: vgaarb: loaded Jan 28 00:22:32.180152 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 28 00:22:32.180157 kernel: VFS: Disk quotas dquot_6.6.0 Jan 28 00:22:32.180162 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 28 00:22:32.180167 kernel: pnp: PnP ACPI init Jan 28 00:22:32.180173 kernel: pnp: PnP ACPI: found 0 devices Jan 28 00:22:32.180178 kernel: NET: Registered PF_INET protocol family Jan 28 00:22:32.180183 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 28 00:22:32.180188 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 28 00:22:32.180194 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 28 00:22:32.180199 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 28 00:22:32.180204 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 28 00:22:32.180210 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 28 00:22:32.180215 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 28 00:22:32.180220 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 28 00:22:32.180226 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 28 00:22:32.180231 kernel: PCI: CLS 0 bytes, default 64 Jan 28 00:22:32.180236 kernel: kvm [1]: HYP mode not available Jan 
28 00:22:32.180241 kernel: Initialise system trusted keyrings Jan 28 00:22:32.180246 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 28 00:22:32.180252 kernel: Key type asymmetric registered Jan 28 00:22:32.180257 kernel: Asymmetric key parser 'x509' registered Jan 28 00:22:32.180262 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 28 00:22:32.180267 kernel: io scheduler mq-deadline registered Jan 28 00:22:32.180272 kernel: io scheduler kyber registered Jan 28 00:22:32.180278 kernel: io scheduler bfq registered Jan 28 00:22:32.180283 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 28 00:22:32.180289 kernel: thunder_xcv, ver 1.0 Jan 28 00:22:32.180294 kernel: thunder_bgx, ver 1.0 Jan 28 00:22:32.180299 kernel: nicpf, ver 1.0 Jan 28 00:22:32.180304 kernel: nicvf, ver 1.0 Jan 28 00:22:32.180431 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 28 00:22:32.180503 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-28T00:22:30 UTC (1769559750) Jan 28 00:22:32.180511 kernel: efifb: probing for efifb Jan 28 00:22:32.180517 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 28 00:22:32.180522 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 28 00:22:32.180527 kernel: efifb: scrolling: redraw Jan 28 00:22:32.180532 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 28 00:22:32.180538 kernel: Console: switching to colour frame buffer device 128x48 Jan 28 00:22:32.180543 kernel: fb0: EFI VGA frame buffer device Jan 28 00:22:32.180549 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 28 00:22:32.180554 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 28 00:22:32.180559 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 28 00:22:32.180565 kernel: watchdog: NMI not fully supported Jan 28 00:22:32.180570 kernel: NET: Registered PF_INET6 protocol family Jan 28 00:22:32.180575 kernel: watchdog: Hard watchdog permanently disabled Jan 28 00:22:32.180580 kernel: Segment Routing with IPv6 Jan 28 00:22:32.180586 kernel: In-situ OAM (IOAM) with IPv6 Jan 28 00:22:32.180591 kernel: NET: Registered PF_PACKET protocol family Jan 28 00:22:32.180596 kernel: Key type dns_resolver registered Jan 28 00:22:32.180601 kernel: registered taskstats version 1 Jan 28 00:22:32.180607 kernel: Loading compiled-in X.509 certificates Jan 28 00:22:32.180612 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ec5486df2a249129ee021351ee74a3a3fab70361' Jan 28 00:22:32.180617 kernel: Demotion targets for Node 0: null Jan 28 00:22:32.180623 kernel: Key type .fscrypt registered Jan 28 00:22:32.180628 kernel: Key type fscrypt-provisioning registered Jan 28 00:22:32.180633 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 28 00:22:32.180638 kernel: ima: Allocated hash algorithm: sha1 Jan 28 00:22:32.180644 kernel: ima: No architecture policies found Jan 28 00:22:32.180649 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 28 00:22:32.180654 kernel: clk: Disabling unused clocks Jan 28 00:22:32.180659 kernel: PM: genpd: Disabling unused power domains Jan 28 00:22:32.180665 kernel: Freeing unused kernel memory: 12480K Jan 28 00:22:32.180670 kernel: Run /init as init process Jan 28 00:22:32.180675 kernel: with arguments: Jan 28 00:22:32.180680 kernel: /init Jan 28 00:22:32.180685 kernel: with environment: Jan 28 00:22:32.180693 kernel: HOME=/ Jan 28 00:22:32.180698 kernel: TERM=linux Jan 28 00:22:32.180704 kernel: hv_vmbus: Vmbus version:5.3 Jan 28 00:22:32.180709 kernel: hv_vmbus: registering driver hid_hyperv Jan 28 00:22:32.180714 kernel: SCSI subsystem initialized Jan 28 00:22:32.180720 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 28 00:22:32.180805 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 28 00:22:32.180812 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 28 00:22:32.182852 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 28 00:22:32.182861 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 28 00:22:32.182867 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 28 00:22:32.182873 kernel: PTP clock support registered Jan 28 00:22:32.182878 kernel: hv_utils: Registering HyperV Utility Driver Jan 28 00:22:32.182883 kernel: hv_vmbus: registering driver hv_utils Jan 28 00:22:32.182889 kernel: hv_utils: Heartbeat IC version 3.0 Jan 28 00:22:32.182896 kernel: hv_utils: Shutdown IC version 3.2 Jan 28 00:22:32.182901 kernel: hv_utils: TimeSync IC version 4.0 Jan 28 00:22:32.182906 kernel: hv_vmbus: registering driver hv_storvsc Jan 28 00:22:32.183044 kernel: scsi host0: storvsc_host_t Jan 28 00:22:32.183128 kernel: scsi host1: storvsc_host_t Jan 28 00:22:32.183217 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 28 00:22:32.183303 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 28 00:22:32.183377 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 28 00:22:32.183450 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 28 00:22:32.183523 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 28 00:22:32.183595 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 28 00:22:32.183667 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 28 00:22:32.183748 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#61 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 28 00:22:32.183827 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#4 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Jan 28 00:22:32.183834 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 28 00:22:32.183908 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 28 00:22:32.183915 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 28 00:22:32.183921 kernel: device-mapper: uevent: version 1.0.3 Jan 28 00:22:32.183927 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 28 00:22:32.183999 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 28 00:22:32.184006 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 28 00:22:32.184011 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 28 00:22:32.184082 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 28 00:22:32.184088 kernel: raid6: neonx8 gen() 18549 MB/s Jan 28 00:22:32.184094 kernel: raid6: neonx4 gen() 18569 MB/s Jan 28 00:22:32.184100 kernel: raid6: neonx2 gen() 17084 MB/s Jan 28 00:22:32.184105 kernel: raid6: neonx1 gen() 15061 MB/s Jan 28 00:22:32.184110 kernel: raid6: int64x8 gen() 10542 MB/s Jan 28 00:22:32.184115 kernel: raid6: int64x4 gen() 10614 MB/s Jan 28 00:22:32.184120 kernel: raid6: int64x2 gen() 8989 MB/s Jan 28 00:22:32.184126 kernel: raid6: int64x1 gen() 7020 MB/s Jan 28 00:22:32.184131 kernel: raid6: using algorithm neonx4 gen() 18569 MB/s Jan 28 00:22:32.184137 kernel: raid6: .... xor() 15148 MB/s, rmw enabled Jan 28 00:22:32.184143 kernel: raid6: using neon recovery algorithm Jan 28 00:22:32.184148 kernel: xor: measuring software checksum speed Jan 28 00:22:32.184153 kernel: 8regs : 28627 MB/sec Jan 28 00:22:32.184158 kernel: 32regs : 28814 MB/sec Jan 28 00:22:32.184163 kernel: arm64_neon : 37354 MB/sec Jan 28 00:22:32.184168 kernel: xor: using function: arm64_neon (37354 MB/sec) Jan 28 00:22:32.184174 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 28 00:22:32.184180 kernel: BTRFS: device fsid f330caf6-2291-456b-9b1c-a7a0df870f93 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (444) Jan 28 00:22:32.184185 kernel: BTRFS info (device dm-0): first mount of filesystem f330caf6-2291-456b-9b1c-a7a0df870f93 Jan 28 00:22:32.184190 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 28 00:22:32.184196 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 28 00:22:32.184201 kernel: BTRFS info (device dm-0): enabling free space tree Jan 28 00:22:32.184206 kernel: loop: module loaded Jan 28 00:22:32.184212 kernel: loop0: detected capacity change from 0 to 91840 Jan 28 00:22:32.184217 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 00:22:32.184223 systemd[1]: Successfully made /usr/ read-only. Jan 28 00:22:32.184231 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 00:22:32.184237 systemd[1]: Detected virtualization microsoft. Jan 28 00:22:32.184242 systemd[1]: Detected architecture arm64. Jan 28 00:22:32.184249 systemd[1]: Running in initrd. Jan 28 00:22:32.184254 systemd[1]: No hostname configured, using default hostname. Jan 28 00:22:32.184260 systemd[1]: Hostname set to . Jan 28 00:22:32.184265 systemd[1]: Initializing machine ID from random generator. Jan 28 00:22:32.184271 systemd[1]: Queued start job for default target initrd.target. Jan 28 00:22:32.184277 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 28 00:22:32.184283 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 28 00:22:32.184289 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:22:32.184295 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 28 00:22:32.184301 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 00:22:32.184307 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 28 00:22:32.184313 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 28 00:22:32.184319 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:22:32.184325 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:22:32.184331 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 28 00:22:32.184336 systemd[1]: Reached target paths.target - Path Units. Jan 28 00:22:32.184342 systemd[1]: Reached target slices.target - Slice Units. Jan 28 00:22:32.184347 systemd[1]: Reached target swap.target - Swaps. Jan 28 00:22:32.184353 systemd[1]: Reached target timers.target - Timer Units. Jan 28 00:22:32.184359 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 00:22:32.184365 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 00:22:32.184370 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 28 00:22:32.184376 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 00:22:32.184381 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 28 00:22:32.184387 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:22:32.184397 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 00:22:32.184404 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:22:32.184410 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 00:22:32.184415 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 28 00:22:32.184421 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 28 00:22:32.184428 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 00:22:32.184433 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 28 00:22:32.184440 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 28 00:22:32.184445 systemd[1]: Starting systemd-fsck-usr.service... Jan 28 00:22:32.184451 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 00:22:32.184457 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 00:22:32.184476 systemd-journald[581]: Collecting audit messages is enabled. Jan 28 00:22:32.184490 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:22:32.184497 systemd-journald[581]: Journal started Jan 28 00:22:32.184510 systemd-journald[581]: Runtime Journal (/run/log/journal/11e6bc912bbb408d86f4cbfb0f2357cf) is 8M, max 78.3M, 70.3M free. Jan 28 00:22:32.201476 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 28 00:22:32.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.204226 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 28 00:22:32.228622 kernel: audit: type=1130 audit(1769559752.200:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.228641 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 28 00:22:32.246396 kernel: audit: type=1130 audit(1769559752.233:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.246427 kernel: Bridge firewalling registered Jan 28 00:22:32.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.234869 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:22:32.275070 kernel: audit: type=1130 audit(1769559752.254:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.246330 systemd-modules-load[584]: Inserted module 'br_netfilter' Jan 28 00:22:32.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.255404 systemd[1]: Finished systemd-fsck-usr.service. Jan 28 00:22:32.297441 kernel: audit: type=1130 audit(1769559752.278:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.294253 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 00:22:32.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.319157 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:22:32.345025 kernel: audit: type=1130 audit(1769559752.301:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.345040 kernel: audit: type=1130 audit(1769559752.323:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:22:32.326306 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 00:22:32.364253 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 00:22:32.370852 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 00:22:32.389800 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 00:22:32.404892 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 00:22:32.409543 systemd-tmpfiles[605]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 28 00:22:32.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.435239 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:22:32.439574 kernel: audit: type=1130 audit(1769559752.416:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.456707 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 00:22:32.479939 kernel: audit: type=1130 audit(1769559752.444:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.479952 kernel: audit: type=1130 audit(1769559752.461:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.479841 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:22:32.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.486796 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 28 00:22:32.509000 audit: BPF prog-id=6 op=LOAD Jan 28 00:22:32.511935 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 00:22:32.522858 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 28 00:22:32.537557 dracut-cmdline[614]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=ffbbb1f2dd4f19dd875b6fa16303680c6bcd968d1e90ec98053307c162b9a8d1 Jan 28 00:22:32.562608 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:22:32.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.608646 systemd-resolved[615]: Positive Trust Anchors: Jan 28 00:22:32.608659 systemd-resolved[615]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 00:22:32.608662 systemd-resolved[615]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 28 00:22:32.608681 systemd-resolved[615]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 00:22:32.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.625523 systemd-resolved[615]: Defaulting to hostname 'linux'. Jan 28 00:22:32.626148 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 00:22:32.631255 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:22:32.720835 kernel: Loading iSCSI transport class v2.0-870. Jan 28 00:22:32.735826 kernel: iscsi: registered transport (tcp) Jan 28 00:22:32.753273 kernel: iscsi: registered transport (qla4xxx) Jan 28 00:22:32.753300 kernel: QLogic iSCSI HBA Driver Jan 28 00:22:32.776873 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 00:22:32.809246 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 00:22:32.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.821609 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 00:22:32.862750 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 28 00:22:32.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.868518 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 28 00:22:32.891350 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 28 00:22:32.912112 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 28 00:22:32.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.920000 audit: BPF prog-id=7 op=LOAD Jan 28 00:22:32.921000 audit: BPF prog-id=8 op=LOAD Jan 28 00:22:32.923038 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:22:32.965346 systemd-udevd[831]: Using default interface naming scheme 'v257'. Jan 28 00:22:32.970781 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:22:32.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:32.979449 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 28 00:22:33.010866 dracut-pre-trigger[895]: rd.md=0: removing MD RAID activation Jan 28 00:22:33.035005 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 00:22:33.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:33.046000 audit: BPF prog-id=9 op=LOAD Jan 28 00:22:33.047721 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 00:22:33.055863 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 00:22:33.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:33.071691 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 00:22:33.103331 systemd-networkd[990]: lo: Link UP Jan 28 00:22:33.105981 systemd-networkd[990]: lo: Gained carrier Jan 28 00:22:33.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:33.106368 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 00:22:33.114530 systemd[1]: Reached target network.target - Network. Jan 28 00:22:33.127365 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:22:33.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:33.142492 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 28 00:22:33.192845 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#12 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 28 00:22:33.224836 kernel: hv_vmbus: registering driver hv_netvsc Jan 28 00:22:33.232677 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 28 00:22:33.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:33.232724 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:22:33.243299 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:22:33.253725 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:22:33.282048 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:22:33.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:33.314642 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 28 00:22:33.330615 systemd-networkd[990]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 00:22:33.330618 systemd-networkd[990]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 00:22:33.331969 systemd-networkd[990]: eth0: Link UP Jan 28 00:22:33.332084 systemd-networkd[990]: eth0: Gained carrier Jan 28 00:22:33.332092 systemd-networkd[990]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 00:22:33.345311 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 28 00:22:33.357884 systemd-networkd[990]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 28 00:22:33.367519 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 28 00:22:33.386205 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 28 00:22:33.439965 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 28 00:22:33.547988 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 28 00:22:33.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:33.553540 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 00:22:33.563785 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:22:33.573987 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 00:22:33.585192 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 28 00:22:33.615089 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 28 00:22:33.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:34.465061 disk-uuid[1096]: Warning: The kernel is still using the old partition table. Jan 28 00:22:34.465061 disk-uuid[1096]: The new table will be used at the next reboot or after you Jan 28 00:22:34.465061 disk-uuid[1096]: run partprobe(8) or kpartx(8) Jan 28 00:22:34.465061 disk-uuid[1096]: The operation has completed successfully. 
Jan 28 00:22:34.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:34.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:34.474242 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 28 00:22:34.474354 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 28 00:22:34.482244 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 28 00:22:34.530845 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1217) Jan 28 00:22:34.542359 kernel: BTRFS info (device sda6): first mount of filesystem 4a39c435-7a4d-46da-88d1-24fe24a14e45 Jan 28 00:22:34.542389 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 28 00:22:34.555957 kernel: BTRFS info (device sda6): turning on async discard Jan 28 00:22:34.555988 kernel: BTRFS info (device sda6): enabling free space tree Jan 28 00:22:34.564851 kernel: BTRFS info (device sda6): last unmount of filesystem 4a39c435-7a4d-46da-88d1-24fe24a14e45 Jan 28 00:22:34.565636 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 28 00:22:34.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:34.571128 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 28 00:22:34.683845 kernel: hv_netvsc 7ced8d89-b24c-7ced-8d89-b24c7ced8d89 eth0: VF slot 1 added Jan 28 00:22:34.694837 kernel: hv_vmbus: registering driver hv_pci Jan 28 00:22:34.700960 kernel: hv_pci a036b620-86fc-414e-b20d-0e1d14d32581: PCI VMBus probing: Using version 0x10004 Jan 28 00:22:34.712645 kernel: hv_pci a036b620-86fc-414e-b20d-0e1d14d32581: PCI host bridge to bus 86fc:00 Jan 28 00:22:34.712796 kernel: pci_bus 86fc:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 28 00:22:34.712941 kernel: pci_bus 86fc:00: No busn resource found for root bus, will use [bus 00-ff] Jan 28 00:22:34.720824 kernel: pci 86fc:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Jan 28 00:22:34.729878 kernel: pci 86fc:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 28 00:22:34.736928 kernel: pci 86fc:00:02.0: enabling Extended Tags Jan 28 00:22:34.750860 kernel: pci 86fc:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 86fc:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Jan 28 00:22:34.765835 kernel: pci_bus 86fc:00: busn_res: [bus 00-ff] end is updated to 00 Jan 28 00:22:34.766000 kernel: pci 86fc:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Jan 28 00:22:34.814366 ignition[1236]: Ignition 2.24.0 Jan 28 00:22:34.816954 ignition[1236]: Stage: fetch-offline Jan 28 00:22:34.821670 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 00:22:34.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:22:34.817131 ignition[1236]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:22:34.833006 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 28 00:22:34.817140 ignition[1236]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 00:22:34.817217 ignition[1236]: parsed url from cmdline: "" Jan 28 00:22:34.817220 ignition[1236]: no config URL provided Jan 28 00:22:34.817223 ignition[1236]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 00:22:34.868781 kernel: mlx5_core 86fc:00:02.0: enabling device (0000 -> 0002) Jan 28 00:22:34.868977 kernel: mlx5_core 86fc:00:02.0: PTM is not supported by PCIe Jan 28 00:22:34.869074 kernel: mlx5_core 86fc:00:02.0: firmware version: 16.30.5026 Jan 28 00:22:34.817230 ignition[1236]: no config at "/usr/lib/ignition/user.ign" Jan 28 00:22:34.817233 ignition[1236]: failed to fetch config: resource requires networking Jan 28 00:22:34.817528 ignition[1236]: Ignition finished successfully Jan 28 00:22:34.874089 ignition[1243]: Ignition 2.24.0 Jan 28 00:22:34.874093 ignition[1243]: Stage: fetch Jan 28 00:22:34.874285 ignition[1243]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:22:34.874294 ignition[1243]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 00:22:34.874359 ignition[1243]: parsed url from cmdline: "" Jan 28 00:22:34.874363 ignition[1243]: no config URL provided Jan 28 00:22:34.874366 ignition[1243]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 00:22:34.874370 ignition[1243]: no config at "/usr/lib/ignition/user.ign" Jan 28 00:22:34.874384 ignition[1243]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 28 00:22:34.947186 ignition[1243]: GET result: OK Jan 28 00:22:34.949789 ignition[1243]: config has been read from IMDS userdata Jan 28 00:22:34.949818 ignition[1243]: parsing config with SHA512: 5bff425e241bc8f16f0a48e404a0a16ff9619ad08d02cd229c75cb43506000133d2d2684c8524c1d614dac5096c1662928248a7b650b667861ef04a0e284f4af Jan 28 00:22:34.956551 unknown[1243]: fetched base config from "system" Jan 28 00:22:34.956777 ignition[1243]: fetch: fetch complete Jan 28 00:22:34.956558 unknown[1243]: fetched base config from "system" Jan 28 00:22:34.956780 ignition[1243]: fetch: fetch passed Jan 28 00:22:34.977908 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 28 00:22:34.977925 kernel: audit: type=1130 audit(1769559754.974:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:34.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:34.956562 unknown[1243]: fetched user config from "azure" Jan 28 00:22:34.956809 ignition[1243]: Ignition finished successfully Jan 28 00:22:34.962246 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 28 00:22:34.990434 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 28 00:22:35.022108 ignition[1252]: Ignition 2.24.0 Jan 28 00:22:35.022830 ignition[1252]: Stage: kargs Jan 28 00:22:35.023039 ignition[1252]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:22:35.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 28 00:22:35.027128 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 28 00:22:35.058500 kernel: audit: type=1130 audit(1769559755.032:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:35.023046 ignition[1252]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 00:22:35.049506 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 28 00:22:35.024877 ignition[1252]: kargs: kargs passed Jan 28 00:22:35.024923 ignition[1252]: Ignition finished successfully Jan 28 00:22:35.080240 ignition[1264]: Ignition 2.24.0 Jan 28 00:22:35.080254 ignition[1264]: Stage: disks Jan 28 00:22:35.083924 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 28 00:22:35.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:35.080475 ignition[1264]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:22:35.080482 ignition[1264]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 00:22:35.107189 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 28 00:22:35.130909 kernel: audit: type=1130 audit(1769559755.090:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:35.081184 ignition[1264]: disks: disks passed Jan 28 00:22:35.115962 systemd-networkd[990]: eth0: Gained IPv6LL Jan 28 00:22:35.081223 ignition[1264]: Ignition finished successfully Jan 28 00:22:35.160012 kernel: hv_netvsc 7ced8d89-b24c-7ced-8d89-b24c7ced8d89 eth0: VF registering: eth1 Jan 28 00:22:35.160184 kernel: mlx5_core 86fc:00:02.0 eth1: joined to eth0 Jan 28 00:22:35.116441 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 00:22:35.126701 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 00:22:35.185694 kernel: mlx5_core 86fc:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 28 00:22:35.185847 kernel: mlx5_core 86fc:00:02.0 enP34556s1: renamed from eth1 Jan 28 00:22:35.135097 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 00:22:35.142333 systemd[1]: Reached target basic.target - Basic System. Jan 28 00:22:35.160897 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 28 00:22:35.190133 systemd-networkd[990]: eth1: Interface name change detected, renamed to enP34556s1. Jan 28 00:22:35.232768 systemd-fsck[1275]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Jan 28 00:22:35.241322 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 28 00:22:35.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:35.266834 kernel: audit: type=1130 audit(1769559755.246:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:35.264905 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 28 00:22:35.318839 kernel: mlx5_core 86fc:00:02.0 enP34556s1: Link up Jan 28 00:22:35.386836 kernel: EXT4-fs (sda9): mounted filesystem 82b1105b-3cc1-4c68-a536-2edaf8b9c39b r/w with ordered data mode. Quota mode: none. Jan 28 00:22:35.386886 kernel: hv_netvsc 7ced8d89-b24c-7ced-8d89-b24c7ced8d89 eth0: Data path switched to VF: enP34556s1 Jan 28 00:22:35.388264 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 28 00:22:35.392245 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 28 00:22:35.399899 systemd-networkd[990]: enP34556s1: Link UP Jan 28 00:22:35.400055 systemd-networkd[990]: enP34556s1: Gained carrier Jan 28 00:22:35.412961 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 00:22:35.427386 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 28 00:22:35.441323 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 28 00:22:35.459708 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1289) Jan 28 00:22:35.460306 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 28 00:22:35.486857 kernel: BTRFS info (device sda6): first mount of filesystem 4a39c435-7a4d-46da-88d1-24fe24a14e45 Jan 28 00:22:35.486876 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 28 00:22:35.460371 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 00:22:35.493138 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 28 00:22:35.502549 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 28 00:22:35.525678 kernel: BTRFS info (device sda6): turning on async discard Jan 28 00:22:35.525709 kernel: BTRFS info (device sda6): enabling free space tree Jan 28 00:22:35.526596 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 00:22:35.650502 coreos-metadata[1291]: Jan 28 00:22:35.650 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 28 00:22:35.658603 coreos-metadata[1291]: Jan 28 00:22:35.658 INFO Fetch successful Jan 28 00:22:35.658603 coreos-metadata[1291]: Jan 28 00:22:35.658 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 28 00:22:35.671435 coreos-metadata[1291]: Jan 28 00:22:35.671 INFO Fetch successful Jan 28 00:22:35.675946 coreos-metadata[1291]: Jan 28 00:22:35.675 INFO wrote hostname ci-4547.1.0-n-77eb5aaac5 to /sysroot/etc/hostname Jan 28 00:22:35.682919 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 28 00:22:35.706542 kernel: audit: type=1130 audit(1769559755.687:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:35.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:35.978462 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 28 00:22:35.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:22:35.993919 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 28 00:22:36.011175 kernel: audit: type=1130 audit(1769559755.986:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:36.005937 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 00:22:36.032344 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 28 00:22:36.036981 kernel: BTRFS info (device sda6): last unmount of filesystem 4a39c435-7a4d-46da-88d1-24fe24a14e45 Jan 28 00:22:36.050465 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 28 00:22:36.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:36.072444 ignition[1393]: INFO : Ignition 2.24.0 Jan 28 00:22:36.072444 ignition[1393]: INFO : Stage: mount Jan 28 00:22:36.072444 ignition[1393]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:22:36.072444 ignition[1393]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 00:22:36.109134 kernel: audit: type=1130 audit(1769559756.055:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:36.109152 kernel: audit: type=1130 audit(1769559756.080:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:36.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:36.109188 ignition[1393]: INFO : mount: mount passed Jan 28 00:22:36.109188 ignition[1393]: INFO : Ignition finished successfully Jan 28 00:22:36.076802 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 00:22:36.082258 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 00:22:36.116954 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 00:22:36.148836 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1404) Jan 28 00:22:36.158826 kernel: BTRFS info (device sda6): first mount of filesystem 4a39c435-7a4d-46da-88d1-24fe24a14e45 Jan 28 00:22:36.158855 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 28 00:22:36.167849 kernel: BTRFS info (device sda6): turning on async discard Jan 28 00:22:36.167873 kernel: BTRFS info (device sda6): enabling free space tree Jan 28 00:22:36.169283 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 28 00:22:36.192846 ignition[1422]: INFO : Ignition 2.24.0 Jan 28 00:22:36.192846 ignition[1422]: INFO : Stage: files Jan 28 00:22:36.192846 ignition[1422]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:22:36.192846 ignition[1422]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 00:22:36.192846 ignition[1422]: DEBUG : files: compiled without relabeling support, skipping Jan 28 00:22:36.212047 ignition[1422]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 00:22:36.212047 ignition[1422]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 00:22:36.225933 ignition[1422]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 00:22:36.231565 ignition[1422]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 00:22:36.231565 ignition[1422]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 00:22:36.226280 unknown[1422]: wrote ssh authorized keys file for user: core Jan 28 00:22:36.249064 ignition[1422]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 28 00:22:36.256870 ignition[1422]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 28 00:22:36.286570 ignition[1422]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 28 00:22:36.517940 ignition[1422]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 28 00:22:36.525575 ignition[1422]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 28 00:22:36.525575 ignition[1422]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 00:22:36.525575 ignition[1422]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 00:22:36.525575 ignition[1422]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 00:22:36.525575 ignition[1422]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 00:22:36.525575 ignition[1422]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 00:22:36.525575 ignition[1422]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 00:22:36.525575 ignition[1422]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 00:22:36.580548 ignition[1422]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 00:22:36.580548 ignition[1422]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 00:22:36.580548 ignition[1422]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 28 00:22:36.580548 ignition[1422]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 28 00:22:36.580548 ignition[1422]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 28 00:22:36.580548 ignition[1422]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 28 00:22:37.106061 ignition[1422]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 28 00:22:37.356257 ignition[1422]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 28 00:22:37.356257 ignition[1422]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 28 00:22:37.369891 ignition[1422]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 00:22:37.382306 ignition[1422]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 00:22:37.382306 ignition[1422]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 28 00:22:37.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.409941 ignition[1422]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 28 00:22:37.409941 ignition[1422]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 00:22:37.409941 ignition[1422]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 00:22:37.409941 ignition[1422]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 00:22:37.409941 ignition[1422]: INFO : files: files passed Jan 28 00:22:37.409941 ignition[1422]: INFO : Ignition finished successfully Jan 28 00:22:37.454157 kernel: audit: type=1130 audit(1769559757.394:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.389745 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 00:22:37.411517 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 00:22:37.440097 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 00:22:37.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.451059 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 00:22:37.489757 kernel: audit: type=1130 audit(1769559757.471:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:22:37.458036 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 28 00:22:37.496533 initrd-setup-root-after-ignition[1452]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:22:37.496533 initrd-setup-root-after-ignition[1452]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:22:37.514250 initrd-setup-root-after-ignition[1456]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:22:37.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.503076 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 00:22:37.514256 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 00:22:37.519697 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 00:22:37.566086 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 00:22:37.566178 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 00:22:37.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.575299 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 00:22:37.583876 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 00:22:37.592228 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 00:22:37.592826 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 00:22:37.624416 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 00:22:37.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.630505 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 00:22:37.654456 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 28 00:22:37.654601 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:22:37.663874 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:22:37.673224 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 00:22:37.681029 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 00:22:37.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.681109 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 00:22:37.692540 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 00:22:37.696706 systemd[1]: Stopped target basic.target - Basic System. 
Jan 28 00:22:37.704926 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 00:22:37.713069 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 00:22:37.721676 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 00:22:37.730507 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 28 00:22:37.739402 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 00:22:37.748145 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 00:22:37.757444 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 00:22:37.765560 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 00:22:37.774528 systemd[1]: Stopped target swap.target - Swaps. Jan 28 00:22:37.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.781831 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 00:22:37.781910 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 00:22:37.792769 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:22:37.797246 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:22:37.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.805678 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 00:22:37.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.809833 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:22:37.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.814898 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 00:22:37.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.814969 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 00:22:37.828036 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 00:22:37.828113 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 00:22:37.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.833301 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 00:22:37.833361 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 00:22:37.840878 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Jan 28 00:22:37.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.903632 ignition[1476]: INFO : Ignition 2.24.0 Jan 28 00:22:37.903632 ignition[1476]: INFO : Stage: umount Jan 28 00:22:37.903632 ignition[1476]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:22:37.903632 ignition[1476]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 00:22:37.903632 ignition[1476]: INFO : umount: umount passed Jan 28 00:22:37.903632 ignition[1476]: INFO : Ignition finished successfully Jan 28 00:22:37.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.840945 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 28 00:22:37.851354 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 00:22:37.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.863910 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 00:22:37.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.864016 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:22:37.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.881760 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 00:22:37.890252 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 00:22:37.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.893846 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:22:37.900973 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 00:22:37.901052 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:22:37.908788 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 00:22:37.908872 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 00:22:37.919980 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 00:22:37.921832 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 00:22:37.936261 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 28 00:22:38.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.937425 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 00:22:38.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.937602 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 00:22:37.948260 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 00:22:37.948300 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 00:22:37.958069 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 28 00:22:38.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:38.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.958110 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 28 00:22:37.965682 systemd[1]: Stopped target network.target - Network. Jan 28 00:22:38.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:38.098000 audit: BPF prog-id=9 op=UNLOAD Jan 28 00:22:37.973602 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 00:22:37.973649 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 00:22:38.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:37.982774 systemd[1]: Stopped target paths.target - Path Units. Jan 28 00:22:37.991426 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 00:22:38.121000 audit: BPF prog-id=6 op=UNLOAD Jan 28 00:22:37.994833 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:22:38.001048 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 00:22:38.008381 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 00:22:38.016306 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 00:22:38.016343 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 00:22:38.023703 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 00:22:38.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:38.023732 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 00:22:38.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:38.032267 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. 
Jan 28 00:22:38.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:38.032286 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 28 00:22:38.040870 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 00:22:38.040921 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 00:22:38.048416 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 00:22:38.048446 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 00:22:38.055962 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 00:22:38.064977 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 00:22:38.073889 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 00:22:38.073958 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 00:22:38.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:38.091154 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 00:22:38.091256 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 00:22:38.102963 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 00:22:38.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:38.104851 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 00:22:38.118139 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 28 00:22:38.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:38.126701 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 00:22:38.291668 kernel: hv_netvsc 7ced8d89-b24c-7ced-8d89-b24c7ced8d89 eth0: Data path switched from VF: enP34556s1 Jan 28 00:22:38.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:38.126738 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:22:38.140929 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 00:22:38.153080 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 00:22:38.153137 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 00:22:38.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:38.162190 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 00:22:38.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:22:38.162225 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:22:38.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:38.174773 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 00:22:38.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:38.174827 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 00:22:38.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:38.183548 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:22:38.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:38.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:38.219112 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 00:22:38.224709 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:22:38.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:38.233996 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 00:22:38.234030 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 00:22:38.243418 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 00:22:38.243442 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:22:38.251186 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 00:22:38.251224 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 00:22:38.262882 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 00:22:38.262917 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 00:22:38.281718 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 00:22:38.281764 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 00:22:38.293942 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 00:22:38.305469 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 28 00:22:38.305631 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 00:22:38.314430 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 00:22:38.314466 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:22:38.461199 systemd-journald[581]: Received SIGTERM from PID 1 (systemd). 
Jan 28 00:22:38.324563 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 00:22:38.324595 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:22:38.333710 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 00:22:38.333811 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 00:22:38.344862 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 00:22:38.345005 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 00:22:38.352757 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 00:22:38.352942 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 00:22:38.362360 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 00:22:38.371336 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 00:22:38.371393 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 00:22:38.379984 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 00:22:38.403292 systemd[1]: Switching root. Jan 28 00:22:38.511859 systemd-journald[581]: Journal stopped Jan 28 00:22:40.591241 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 00:22:40.591258 kernel: SELinux: policy capability open_perms=1 Jan 28 00:22:40.591266 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 00:22:40.591272 kernel: SELinux: policy capability always_check_network=0 Jan 28 00:22:40.591279 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 00:22:40.591286 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 00:22:40.591292 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 00:22:40.591298 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 00:22:40.591304 kernel: SELinux: policy capability userspace_initial_context=0 Jan 28 00:22:40.591310 systemd[1]: Successfully loaded SELinux policy in 82.264ms. Jan 28 00:22:40.591318 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.314ms. Jan 28 00:22:40.591325 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 00:22:40.591331 systemd[1]: Detected virtualization microsoft. Jan 28 00:22:40.591337 systemd[1]: Detected architecture arm64. Jan 28 00:22:40.591345 systemd[1]: Detected first boot. Jan 28 00:22:40.591352 systemd[1]: Hostname set to <ci-4547.1.0-n-77eb5aaac5>. Jan 28 00:22:40.591358 systemd[1]: Initializing machine ID from random generator. Jan 28 00:22:40.591364 zram_generator::config[1518]: No configuration found. Jan 28 00:22:40.591371 kernel: NET: Registered PF_VSOCK protocol family Jan 28 00:22:40.591378 systemd[1]: Populated /etc with preset unit settings. Jan 28 00:22:40.591384 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 28 00:22:40.591391 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 28 00:22:40.591397 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 28 00:22:40.591404 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 00:22:40.591411 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 28 00:22:40.591418 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 00:22:40.591425 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 00:22:40.591432 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 00:22:40.591438 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 00:22:40.591445 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 00:22:40.591451 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 00:22:40.591458 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:22:40.591465 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:22:40.591471 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 00:22:40.591478 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 00:22:40.591484 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 28 00:22:40.591491 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 00:22:40.591497 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 28 00:22:40.591505 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:22:40.591511 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:22:40.591519 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 28 00:22:40.591526 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 28 00:22:40.591532 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 28 00:22:40.591539 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 00:22:40.591546 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:22:40.591552 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 00:22:40.591559 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 28 00:22:40.591566 systemd[1]: Reached target slices.target - Slice Units. Jan 28 00:22:40.591573 systemd[1]: Reached target swap.target - Swaps. Jan 28 00:22:40.591580 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 00:22:40.591586 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 00:22:40.591594 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 28 00:22:40.591600 kernel: kauditd_printk_skb: 59 callbacks suppressed Jan 28 00:22:40.591607 kernel: audit: type=1335 audit(1769559760.109:103): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 28 00:22:40.591614 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 28 00:22:40.591621 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 28 00:22:40.591627 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:22:40.591634 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. 
Jan 28 00:22:40.591641 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 28 00:22:40.591647 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 00:22:40.591654 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:22:40.591661 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 00:22:40.591668 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 00:22:40.591674 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 00:22:40.591681 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 00:22:40.591687 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 00:22:40.591694 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 00:22:40.591701 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 00:22:40.591709 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 00:22:40.591716 systemd[1]: Reached target machines.target - Containers. Jan 28 00:22:40.591722 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 00:22:40.591729 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:22:40.591736 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 00:22:40.591743 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 00:22:40.591750 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:22:40.591757 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 00:22:40.591763 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:22:40.591770 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 00:22:40.591777 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:22:40.591783 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 00:22:40.591790 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 28 00:22:40.591798 kernel: ACPI: bus type drm_connector registered Jan 28 00:22:40.591804 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 28 00:22:40.591810 kernel: fuse: init (API version 7.41) Jan 28 00:22:40.591827 kernel: audit: type=1131 audit(1769559760.449:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.591834 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 28 00:22:40.591841 systemd[1]: Stopped systemd-fsck-usr.service. Jan 28 00:22:40.591849 kernel: audit: type=1131 audit(1769559760.473:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:22:40.591855 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 00:22:40.591863 kernel: audit: type=1334 audit(1769559760.496:106): prog-id=14 op=UNLOAD Jan 28 00:22:40.591870 kernel: audit: type=1334 audit(1769559760.496:107): prog-id=13 op=UNLOAD Jan 28 00:22:40.591876 kernel: audit: type=1334 audit(1769559760.501:108): prog-id=15 op=LOAD Jan 28 00:22:40.591882 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 00:22:40.591889 kernel: audit: type=1334 audit(1769559760.501:109): prog-id=16 op=LOAD Jan 28 00:22:40.591895 kernel: audit: type=1334 audit(1769559760.501:110): prog-id=17 op=LOAD Jan 28 00:22:40.591902 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 00:22:40.591909 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 00:22:40.591915 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 00:22:40.591933 systemd-journald[1623]: Collecting audit messages is enabled. Jan 28 00:22:40.591949 kernel: audit: type=1305 audit(1769559760.589:111): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 28 00:22:40.591955 systemd-journald[1623]: Journal started Jan 28 00:22:40.591970 systemd-journald[1623]: Runtime Journal (/run/log/journal/a4246c29413a464d8839666c055045a0) is 8M, max 78.3M, 70.3M free. Jan 28 00:22:40.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.496000 audit: BPF prog-id=14 op=UNLOAD Jan 28 00:22:40.496000 audit: BPF prog-id=13 op=UNLOAD Jan 28 00:22:40.501000 audit: BPF prog-id=15 op=LOAD Jan 28 00:22:40.501000 audit: BPF prog-id=16 op=LOAD Jan 28 00:22:40.501000 audit: BPF prog-id=17 op=LOAD Jan 28 00:22:40.589000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 28 00:22:39.793386 systemd[1]: Queued start job for default target multi-user.target. Jan 28 00:22:39.800227 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 28 00:22:40.589000 audit[1623]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc1cc81e0 a2=4000 a3=0 items=0 ppid=1 pid=1623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:22:39.800596 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 28 00:22:39.801968 systemd[1]: systemd-journald.service: Consumed 2.427s CPU time. 
Jan 28 00:22:40.631384 kernel: audit: type=1300 audit(1769559760.589:111): arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc1cc81e0 a2=4000 a3=0 items=0 ppid=1 pid=1623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:22:40.589000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 28 00:22:40.635831 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 28 00:22:40.654656 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 00:22:40.664825 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 00:22:40.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.665631 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 00:22:40.670700 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 00:22:40.675634 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 00:22:40.680002 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 00:22:40.684668 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 00:22:40.689693 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 00:22:40.693998 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 00:22:40.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.699381 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:22:40.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.705291 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 28 00:22:40.705416 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 00:22:40.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.711106 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:22:40.711223 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:22:40.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 28 00:22:40.716328 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 00:22:40.716443 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 00:22:40.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.721350 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:22:40.721469 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:22:40.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.726948 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 00:22:40.727055 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 00:22:40.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.732030 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:22:40.732151 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:22:40.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.737266 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 00:22:40.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.742390 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 00:22:40.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.748439 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 28 00:22:40.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.754501 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 28 00:22:40.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.761001 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:22:40.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.774704 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 00:22:40.779880 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 28 00:22:40.786273 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 00:22:40.796856 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 00:22:40.802064 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 00:22:40.802090 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 00:22:40.807178 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 28 00:22:40.812807 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:22:40.812900 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 00:22:40.815916 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 00:22:40.821359 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 00:22:40.826623 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 00:22:40.827272 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 00:22:40.832323 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 00:22:40.833966 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 00:22:40.839948 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 00:22:40.848353 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 00:22:40.854399 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 00:22:40.861634 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 00:22:40.863321 systemd-journald[1623]: Time spent on flushing to /var/log/journal/a4246c29413a464d8839666c055045a0 is 25.407ms for 1061 entries. Jan 28 00:22:40.863321 systemd-journald[1623]: System Journal (/var/log/journal/a4246c29413a464d8839666c055045a0) is 8M, max 2.2G, 2.2G free. 
Jan 28 00:22:40.987191 systemd-journald[1623]: Received client request to flush runtime journal. Jan 28 00:22:40.987244 kernel: loop1: detected capacity change from 0 to 207008 Jan 28 00:22:40.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.874845 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 00:22:40.882866 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 00:22:40.894950 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 28 00:22:40.901192 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:22:40.982171 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 00:22:40.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.990088 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 00:22:40.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:40.996000 audit: BPF prog-id=18 op=LOAD Jan 28 00:22:40.996000 audit: BPF prog-id=19 op=LOAD Jan 28 00:22:40.996000 audit: BPF prog-id=20 op=LOAD Jan 28 00:22:40.998507 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 28 00:22:41.003000 audit: BPF prog-id=21 op=LOAD Jan 28 00:22:41.005148 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 00:22:41.010677 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 00:22:41.019000 audit: BPF prog-id=22 op=LOAD Jan 28 00:22:41.019000 audit: BPF prog-id=23 op=LOAD Jan 28 00:22:41.019000 audit: BPF prog-id=24 op=LOAD Jan 28 00:22:41.021487 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 28 00:22:41.027000 audit: BPF prog-id=25 op=LOAD Jan 28 00:22:41.028000 audit: BPF prog-id=26 op=LOAD Jan 28 00:22:41.028000 audit: BPF prog-id=27 op=LOAD Jan 28 00:22:41.029967 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 00:22:41.050532 kernel: loop2: detected capacity change from 0 to 27544 Jan 28 00:22:41.044948 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 28 00:22:41.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:41.080392 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 00:22:41.087106 systemd-tmpfiles[1676]: ACLs are not supported, ignoring. Jan 28 00:22:41.087121 systemd-tmpfiles[1676]: ACLs are not supported, ignoring. 
Jan 28 00:22:41.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:41.094043 systemd-nsresourced[1677]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 28 00:22:41.094924 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:22:41.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:41.104264 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 28 00:22:41.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:41.164897 kernel: loop3: detected capacity change from 0 to 45344 Jan 28 00:22:41.168501 systemd-oomd[1674]: No swap; memory pressure usage will be degraded Jan 28 00:22:41.169036 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 28 00:22:41.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:41.192729 systemd-resolved[1675]: Positive Trust Anchors: Jan 28 00:22:41.192977 systemd-resolved[1675]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 00:22:41.193021 systemd-resolved[1675]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 28 00:22:41.193077 systemd-resolved[1675]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 00:22:41.213147 systemd-resolved[1675]: Using system hostname 'ci-4547.1.0-n-77eb5aaac5'. Jan 28 00:22:41.214210 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 00:22:41.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:41.219200 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:22:41.268833 kernel: loop4: detected capacity change from 0 to 100192 Jan 28 00:22:41.331725 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 00:22:41.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:22:41.336000 audit: BPF prog-id=8 op=UNLOAD Jan 28 00:22:41.336000 audit: BPF prog-id=7 op=UNLOAD Jan 28 00:22:41.337000 audit: BPF prog-id=28 op=LOAD Jan 28 00:22:41.337000 audit: BPF prog-id=29 op=LOAD Jan 28 00:22:41.338747 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:22:41.364437 systemd-udevd[1701]: Using default interface naming scheme 'v257'. Jan 28 00:22:41.375855 kernel: loop5: detected capacity change from 0 to 207008 Jan 28 00:22:41.393053 kernel: loop6: detected capacity change from 0 to 27544 Jan 28 00:22:41.405843 kernel: loop7: detected capacity change from 0 to 45344 Jan 28 00:22:41.419853 kernel: loop1: detected capacity change from 0 to 100192 Jan 28 00:22:41.430394 (sd-merge)[1703]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Jan 28 00:22:41.433890 (sd-merge)[1703]: Merged extensions into '/usr'. Jan 28 00:22:41.437751 systemd[1]: Reload requested from client PID 1659 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 00:22:41.437770 systemd[1]: Reloading... Jan 28 00:22:41.521859 zram_generator::config[1759]: No configuration found. Jan 28 00:22:41.555966 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 00:22:41.563857 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#297 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 28 00:22:41.630868 kernel: hv_vmbus: registering driver hv_balloon Jan 28 00:22:41.637883 kernel: hv_vmbus: registering driver hyperv_fb Jan 28 00:22:41.637906 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 28 00:22:41.644829 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 28 00:22:41.653887 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 28 00:22:41.653936 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 28 00:22:41.660504 kernel: Console: switching to colour dummy device 80x25 Jan 28 00:22:41.667508 kernel: Console: switching to colour frame buffer device 128x48 Jan 28 00:22:41.712932 kernel: MACsec IEEE 802.1AE Jan 28 00:22:41.772071 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 00:22:41.772198 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 28 00:22:41.772305 systemd[1]: Reloading finished in 334 ms. Jan 28 00:22:41.794854 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:22:41.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:41.802445 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 00:22:41.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:41.862910 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 28 00:22:41.878695 systemd[1]: Starting ensure-sysext.service... Jan 28 00:22:41.882939 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 28 00:22:41.913000 audit: BPF prog-id=30 op=LOAD Jan 28 00:22:41.915139 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 00:22:41.920177 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 00:22:41.927932 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:22:41.937000 audit: BPF prog-id=31 op=LOAD Jan 28 00:22:41.937000 audit: BPF prog-id=15 op=UNLOAD Jan 28 00:22:41.937000 audit: BPF prog-id=32 op=LOAD Jan 28 00:22:41.937000 audit: BPF prog-id=33 op=LOAD Jan 28 00:22:41.937000 audit: BPF prog-id=16 op=UNLOAD Jan 28 00:22:41.937000 audit: BPF prog-id=17 op=UNLOAD Jan 28 00:22:41.937000 audit: BPF prog-id=34 op=LOAD Jan 28 00:22:41.937000 audit: BPF prog-id=21 op=UNLOAD Jan 28 00:22:41.938000 audit: BPF prog-id=35 op=LOAD Jan 28 00:22:41.938000 audit: BPF prog-id=22 op=UNLOAD Jan 28 00:22:41.938000 audit: BPF prog-id=36 op=LOAD Jan 28 00:22:41.938000 audit: BPF prog-id=37 op=LOAD Jan 28 00:22:41.938000 audit: BPF prog-id=23 op=UNLOAD Jan 28 00:22:41.938000 audit: BPF prog-id=24 op=UNLOAD Jan 28 00:22:41.939000 audit: BPF prog-id=38 op=LOAD Jan 28 00:22:41.940000 audit: BPF prog-id=25 op=UNLOAD Jan 28 00:22:41.940000 audit: BPF prog-id=39 op=LOAD Jan 28 00:22:41.940000 audit: BPF prog-id=40 op=LOAD Jan 28 00:22:41.940000 audit: BPF prog-id=26 op=UNLOAD Jan 28 00:22:41.940000 audit: BPF prog-id=27 op=UNLOAD Jan 28 00:22:41.940000 audit: BPF prog-id=41 op=LOAD Jan 28 00:22:41.940000 audit: BPF prog-id=42 op=LOAD Jan 28 00:22:41.940000 audit: BPF prog-id=28 op=UNLOAD Jan 28 00:22:41.940000 audit: BPF prog-id=29 op=UNLOAD Jan 28 00:22:41.941000 audit: BPF prog-id=43 op=LOAD Jan 28 00:22:41.941000 audit: BPF prog-id=18 op=UNLOAD Jan 28 00:22:41.941000 audit: BPF prog-id=44 op=LOAD Jan 28 00:22:41.941000 audit: BPF prog-id=45 op=LOAD Jan 28 00:22:41.941000 audit: BPF prog-id=19 op=UNLOAD Jan 28 00:22:41.941000 audit: BPF prog-id=20 op=UNLOAD Jan 28 00:22:41.947932 systemd-tmpfiles[1904]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 28 00:22:41.948509 systemd-tmpfiles[1904]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 28 00:22:41.948660 systemd[1]: Reload requested from client PID 1901 ('systemctl') (unit ensure-sysext.service)... Jan 28 00:22:41.948674 systemd[1]: Reloading... Jan 28 00:22:41.949052 systemd-tmpfiles[1904]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 00:22:41.950222 systemd-tmpfiles[1904]: ACLs are not supported, ignoring. Jan 28 00:22:41.950254 systemd-tmpfiles[1904]: ACLs are not supported, ignoring. Jan 28 00:22:41.965413 systemd-tmpfiles[1904]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 00:22:41.965863 systemd-tmpfiles[1904]: Skipping /boot Jan 28 00:22:41.974131 systemd-tmpfiles[1904]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 00:22:41.974140 systemd-tmpfiles[1904]: Skipping /boot Jan 28 00:22:42.019499 systemd-networkd[1903]: lo: Link UP Jan 28 00:22:42.021113 systemd-networkd[1903]: lo: Gained carrier Jan 28 00:22:42.022571 systemd-networkd[1903]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 00:22:42.022691 systemd-networkd[1903]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 28 00:22:42.027836 zram_generator::config[1949]: No configuration found. Jan 28 00:22:42.066836 kernel: mlx5_core 86fc:00:02.0 enP34556s1: Link up Jan 28 00:22:42.088570 systemd-networkd[1903]: enP34556s1: Link UP Jan 28 00:22:42.088844 kernel: hv_netvsc 7ced8d89-b24c-7ced-8d89-b24c7ced8d89 eth0: Data path switched to VF: enP34556s1 Jan 28 00:22:42.088668 systemd-networkd[1903]: eth0: Link UP Jan 28 00:22:42.088670 systemd-networkd[1903]: eth0: Gained carrier Jan 28 00:22:42.088684 systemd-networkd[1903]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 00:22:42.093040 systemd-networkd[1903]: enP34556s1: Gained carrier Jan 28 00:22:42.098859 systemd-networkd[1903]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 28 00:22:42.174518 systemd[1]: Reloading finished in 225 ms. Jan 28 00:22:42.200830 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 00:22:42.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:42.206000 audit: BPF prog-id=46 op=LOAD Jan 28 00:22:42.206000 audit: BPF prog-id=35 op=UNLOAD Jan 28 00:22:42.206000 audit: BPF prog-id=47 op=LOAD Jan 28 00:22:42.206000 audit: BPF prog-id=48 op=LOAD Jan 28 00:22:42.206000 audit: BPF prog-id=36 op=UNLOAD Jan 28 00:22:42.206000 audit: BPF prog-id=37 op=UNLOAD Jan 28 00:22:42.207000 audit: BPF prog-id=49 op=LOAD Jan 28 00:22:42.207000 audit: BPF prog-id=38 op=UNLOAD Jan 28 00:22:42.207000 audit: BPF prog-id=50 op=LOAD Jan 28 00:22:42.207000 audit: BPF prog-id=51 op=LOAD Jan 28 00:22:42.207000 audit: BPF prog-id=39 op=UNLOAD Jan 28 00:22:42.207000 audit: BPF prog-id=40 op=UNLOAD Jan 28 00:22:42.207000 audit: BPF prog-id=52 op=LOAD Jan 28 00:22:42.207000 audit: BPF prog-id=34 op=UNLOAD Jan 28 00:22:42.216000 audit: BPF prog-id=53 op=LOAD Jan 28 00:22:42.216000 audit: BPF prog-id=31 op=UNLOAD Jan 28 00:22:42.216000 audit: BPF prog-id=54 op=LOAD Jan 28 00:22:42.216000 audit: BPF prog-id=55 op=LOAD Jan 28 00:22:42.217000 audit: BPF prog-id=32 op=UNLOAD Jan 28 00:22:42.217000 audit: BPF prog-id=33 op=UNLOAD Jan 28 00:22:42.217000 audit: BPF prog-id=56 op=LOAD Jan 28 00:22:42.217000 audit: BPF prog-id=43 op=UNLOAD Jan 28 00:22:42.217000 audit: BPF prog-id=57 op=LOAD Jan 28 00:22:42.217000 audit: BPF prog-id=58 op=LOAD Jan 28 00:22:42.217000 audit: BPF prog-id=44 op=UNLOAD Jan 28 00:22:42.217000 audit: BPF prog-id=45 op=UNLOAD Jan 28 00:22:42.218000 audit: BPF prog-id=59 op=LOAD Jan 28 00:22:42.218000 audit: BPF prog-id=60 op=LOAD Jan 28 00:22:42.218000 audit: BPF prog-id=41 op=UNLOAD Jan 28 00:22:42.218000 audit: BPF prog-id=42 op=UNLOAD Jan 28 00:22:42.218000 audit: BPF prog-id=61 op=LOAD Jan 28 00:22:42.218000 audit: BPF prog-id=30 op=UNLOAD Jan 28 00:22:42.221006 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 00:22:42.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:42.226779 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 28 00:22:42.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:42.234032 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:22:42.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:42.244131 systemd[1]: Reached target network.target - Network. Jan 28 00:22:42.249335 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 00:22:42.262460 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 00:22:42.269994 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 00:22:42.278166 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 00:22:42.292896 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 28 00:22:42.301956 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 00:22:42.314619 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 00:22:42.322786 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:22:42.331003 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:22:42.336000 audit[2023]: SYSTEM_BOOT pid=2023 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 28 00:22:42.338065 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:22:42.349433 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:22:42.356480 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:22:42.356633 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 00:22:42.356706 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 00:22:42.357851 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 00:22:42.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:42.364465 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 28 00:22:42.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:22:42.370660 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:22:42.370811 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:22:42.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:42.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:42.376517 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:22:42.376666 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:22:42.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:42.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:42.382945 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:22:42.383092 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:22:42.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:42.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:22:42.397038 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:22:42.398174 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:22:42.405579 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:22:42.413998 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:22:42.418194 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:22:42.418323 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 00:22:42.418383 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 00:22:42.420172 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 28 00:22:42.425572 augenrules[2042]: No rules Jan 28 00:22:42.424000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 28 00:22:42.424000 audit[2042]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe9e86c80 a2=420 a3=0 items=0 ppid=2004 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:22:42.424000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 00:22:42.426242 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 00:22:42.426442 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 00:22:42.430724 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:22:42.434987 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:22:42.441145 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:22:42.441281 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:22:42.448294 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:22:42.448543 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:22:42.464293 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 00:22:42.468359 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:22:42.474253 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:22:42.482717 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 00:22:42.490285 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:22:42.498542 augenrules[2052]: /sbin/augenrules: No change Jan 28 00:22:42.499991 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:22:42.504927 augenrules[2073]: No rules Jan 28 00:22:42.503000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 00:22:42.503000 audit[2073]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe65f7700 a2=420 a3=0 items=0 ppid=2052 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:22:42.503000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 00:22:42.503000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 28 00:22:42.503000 audit[2073]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe65f9b80 a2=420 a3=0 items=0 ppid=2052 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:22:42.503000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 00:22:42.505982 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 28 00:22:42.506120 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 00:22:42.506192 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 00:22:42.506288 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 00:22:42.511883 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 00:22:42.513157 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 00:22:42.518340 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:22:42.518483 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:22:42.523961 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 00:22:42.524088 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 00:22:42.529173 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:22:42.529308 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:22:42.535025 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:22:42.535146 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:22:42.542350 systemd[1]: Finished ensure-sysext.service. Jan 28 00:22:42.548629 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 00:22:42.548770 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 00:22:42.579544 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 00:22:42.585347 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 00:22:43.550640 ldconfig[2006]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 00:22:43.562886 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 00:22:43.568616 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 00:22:43.583730 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 00:22:43.588493 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 00:22:43.592940 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 00:22:43.597844 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 00:22:43.603266 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 00:22:43.607732 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 00:22:43.612690 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 28 00:22:43.617864 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 28 00:22:43.622337 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 28 00:22:43.627466 systemd-networkd[1903]: eth0: Gained IPv6LL Jan 28 00:22:43.627813 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 00:22:43.627865 systemd[1]: Reached target paths.target - Path Units. Jan 28 00:22:43.631504 systemd[1]: Reached target timers.target - Timer Units. Jan 28 00:22:43.641984 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 00:22:43.647357 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 00:22:43.652546 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 28 00:22:43.657706 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 28 00:22:43.662864 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 28 00:22:43.675354 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 00:22:43.679676 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 28 00:22:43.686888 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 00:22:43.692130 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 00:22:43.696666 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 00:22:43.701210 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 00:22:43.705060 systemd[1]: Reached target basic.target - Basic System. Jan 28 00:22:43.708866 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 00:22:43.708891 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 00:22:43.710645 systemd[1]: Starting chronyd.service - NTP client/server... Jan 28 00:22:43.720911 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 00:22:43.727945 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 28 00:22:43.733272 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 00:22:43.741512 chronyd[2091]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 28 00:22:43.743139 chronyd[2091]: Timezone right/UTC failed leap second check, ignoring Jan 28 00:22:43.743528 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 00:22:43.748720 chronyd[2091]: Loaded seccomp filter (level 2) Jan 28 00:22:43.750636 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 00:22:43.756464 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 00:22:43.762137 jq[2099]: false Jan 28 00:22:43.762381 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 00:22:43.774973 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 28 00:22:43.779420 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 28 00:22:43.780184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 28 00:22:43.788498 KVP[2101]: KVP starting; pid is:2101 Jan 28 00:22:43.789525 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 00:22:43.791925 extend-filesystems[2100]: Found /dev/sda6 Jan 28 00:22:43.801892 kernel: hv_utils: KVP IC version 4.0 Jan 28 00:22:43.801937 extend-filesystems[2100]: Found /dev/sda9 Jan 28 00:22:43.801937 extend-filesystems[2100]: Checking size of /dev/sda9 Jan 28 00:22:43.797298 KVP[2101]: KVP LIC Version: 3.1 Jan 28 00:22:43.803933 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 00:22:43.811921 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 00:22:43.822364 extend-filesystems[2100]: Resized partition /dev/sda9 Jan 28 00:22:43.826651 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 28 00:22:43.835219 extend-filesystems[2119]: resize2fs 1.47.3 (8-Jul-2025) Jan 28 00:22:43.841949 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 00:22:43.856801 kernel: EXT4-fs (sda9): resizing filesystem from 6359552 to 6376955 blocks Jan 28 00:22:43.856932 kernel: EXT4-fs (sda9): resized filesystem to 6376955 Jan 28 00:22:43.889982 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 00:22:43.897060 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 00:22:43.897398 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 00:22:43.907660 extend-filesystems[2119]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 28 00:22:43.907660 extend-filesystems[2119]: old_desc_blocks = 4, new_desc_blocks = 4 Jan 28 00:22:43.907660 extend-filesystems[2119]: The filesystem on /dev/sda9 is now 6376955 (4k) blocks long. Jan 28 00:22:43.901225 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 00:22:43.964311 extend-filesystems[2100]: Resized filesystem in /dev/sda9 Jan 28 00:22:43.915000 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 00:22:43.968406 update_engine[2166]: I20260128 00:22:43.952473 2166 main.cc:92] Flatcar Update Engine starting Jan 28 00:22:43.928296 systemd[1]: Started chronyd.service - NTP client/server. Jan 28 00:22:43.969369 jq[2172]: true Jan 28 00:22:43.942388 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 00:22:43.954090 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 00:22:43.954267 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 00:22:43.954460 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 00:22:43.954599 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 00:22:43.969291 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 00:22:43.969473 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 00:22:43.977452 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 00:22:43.988615 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 00:22:43.991038 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 28 00:22:44.016439 jq[2197]: true Jan 28 00:22:44.046080 tar[2196]: linux-arm64/LICENSE Jan 28 00:22:44.046240 tar[2196]: linux-arm64/helm Jan 28 00:22:44.049437 systemd-logind[2162]: New seat seat0. Jan 28 00:22:44.053508 systemd-logind[2162]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 28 00:22:44.053691 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 00:22:44.086255 bash[2230]: Updated "/home/core/.ssh/authorized_keys" Jan 28 00:22:44.087784 dbus-daemon[2094]: [system] SELinux support is enabled Jan 28 00:22:44.088088 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 00:22:44.098200 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 00:22:44.105988 update_engine[2166]: I20260128 00:22:44.105952 2166 update_check_scheduler.cc:74] Next update check in 3m40s Jan 28 00:22:44.106571 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 28 00:22:44.106652 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 00:22:44.106674 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 00:22:44.112734 dbus-daemon[2094]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 28 00:22:44.115989 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 00:22:44.116010 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 00:22:44.126301 systemd[1]: Started update-engine.service - Update Engine. Jan 28 00:22:44.141560 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 28 00:22:44.156777 coreos-metadata[2093]: Jan 28 00:22:44.156 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 28 00:22:44.164315 coreos-metadata[2093]: Jan 28 00:22:44.164 INFO Fetch successful Jan 28 00:22:44.164315 coreos-metadata[2093]: Jan 28 00:22:44.164 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 28 00:22:44.168482 coreos-metadata[2093]: Jan 28 00:22:44.168 INFO Fetch successful Jan 28 00:22:44.168482 coreos-metadata[2093]: Jan 28 00:22:44.168 INFO Fetching http://168.63.129.16/machine/e5052731-3b6a-4a96-8fc8-7a25d2243c49/49fc6055%2D1f6d%2D4c24%2Da3f0%2D09917b7fa9f5.%5Fci%2D4547.1.0%2Dn%2D77eb5aaac5?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 28 00:22:44.169485 coreos-metadata[2093]: Jan 28 00:22:44.169 INFO Fetch successful Jan 28 00:22:44.169485 coreos-metadata[2093]: Jan 28 00:22:44.169 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 28 00:22:44.177840 coreos-metadata[2093]: Jan 28 00:22:44.177 INFO Fetch successful Jan 28 00:22:44.219451 containerd[2198]: time="2026-01-28T00:22:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 28 00:22:44.220183 containerd[2198]: time="2026-01-28T00:22:44.220159992Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 28 00:22:44.226954 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 28 00:22:44.233038 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 00:22:44.243281 containerd[2198]: time="2026-01-28T00:22:44.243251896Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.888µs" Jan 28 00:22:44.243281 containerd[2198]: time="2026-01-28T00:22:44.243278560Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 28 00:22:44.243355 containerd[2198]: time="2026-01-28T00:22:44.243311064Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 28 00:22:44.243355 containerd[2198]: time="2026-01-28T00:22:44.243319344Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 28 00:22:44.243448 containerd[2198]: time="2026-01-28T00:22:44.243434640Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 28 00:22:44.243466 containerd[2198]: time="2026-01-28T00:22:44.243449536Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 00:22:44.243500 containerd[2198]: time="2026-01-28T00:22:44.243489696Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 00:22:44.244921 containerd[2198]: time="2026-01-28T00:22:44.244898272Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 00:22:44.245107 containerd[2198]: time="2026-01-28T00:22:44.245088096Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: 
skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 00:22:44.245132 containerd[2198]: time="2026-01-28T00:22:44.245106288Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 00:22:44.245132 containerd[2198]: time="2026-01-28T00:22:44.245115176Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 00:22:44.245132 containerd[2198]: time="2026-01-28T00:22:44.245121168Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 00:22:44.245284 containerd[2198]: time="2026-01-28T00:22:44.245268248Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 00:22:44.245284 containerd[2198]: time="2026-01-28T00:22:44.245281976Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 28 00:22:44.245348 containerd[2198]: time="2026-01-28T00:22:44.245337208Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 28 00:22:44.245473 containerd[2198]: time="2026-01-28T00:22:44.245459128Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 00:22:44.245490 containerd[2198]: time="2026-01-28T00:22:44.245482712Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 00:22:44.245504 containerd[2198]: time="2026-01-28T00:22:44.245489840Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 28 00:22:44.245607 containerd[2198]: time="2026-01-28T00:22:44.245513664Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 28 00:22:44.245671 containerd[2198]: time="2026-01-28T00:22:44.245658152Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 28 00:22:44.245726 containerd[2198]: time="2026-01-28T00:22:44.245714704Z" level=info msg="metadata content store policy set" policy=shared Jan 28 00:22:44.249905 locksmithd[2240]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 00:22:44.262859 containerd[2198]: time="2026-01-28T00:22:44.262830952Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 28 00:22:44.262924 containerd[2198]: time="2026-01-28T00:22:44.262875720Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 00:22:44.262960 containerd[2198]: time="2026-01-28T00:22:44.262944024Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 00:22:44.262960 containerd[2198]: time="2026-01-28T00:22:44.262957176Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 28 00:22:44.262992 containerd[2198]: time="2026-01-28T00:22:44.262974016Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager 
type=io.containerd.lease.v1 Jan 28 00:22:44.262992 containerd[2198]: time="2026-01-28T00:22:44.262982224Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 28 00:22:44.262992 containerd[2198]: time="2026-01-28T00:22:44.262989720Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 28 00:22:44.263034 containerd[2198]: time="2026-01-28T00:22:44.262996120Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 28 00:22:44.263034 containerd[2198]: time="2026-01-28T00:22:44.263004336Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 28 00:22:44.263034 containerd[2198]: time="2026-01-28T00:22:44.263011224Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 28 00:22:44.263034 containerd[2198]: time="2026-01-28T00:22:44.263017616Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 28 00:22:44.263034 containerd[2198]: time="2026-01-28T00:22:44.263024840Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 28 00:22:44.263034 containerd[2198]: time="2026-01-28T00:22:44.263031104Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 28 00:22:44.263106 containerd[2198]: time="2026-01-28T00:22:44.263038576Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 28 00:22:44.263144 containerd[2198]: time="2026-01-28T00:22:44.263129896Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 28 00:22:44.263166 containerd[2198]: time="2026-01-28T00:22:44.263152936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 28 00:22:44.263166 containerd[2198]: time="2026-01-28T00:22:44.263162592Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 28 00:22:44.263196 containerd[2198]: time="2026-01-28T00:22:44.263169184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 28 00:22:44.263196 containerd[2198]: time="2026-01-28T00:22:44.263175528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 28 00:22:44.263196 containerd[2198]: time="2026-01-28T00:22:44.263181232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 28 00:22:44.263196 containerd[2198]: time="2026-01-28T00:22:44.263188168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 28 00:22:44.263196 containerd[2198]: time="2026-01-28T00:22:44.263195280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 28 00:22:44.263253 containerd[2198]: time="2026-01-28T00:22:44.263202472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 28 00:22:44.263253 containerd[2198]: time="2026-01-28T00:22:44.263209056Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 28 00:22:44.263253 containerd[2198]: time="2026-01-28T00:22:44.263215208Z" level=info msg="loading plugin" 
id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 28 00:22:44.263291 containerd[2198]: time="2026-01-28T00:22:44.263252904Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 28 00:22:44.263291 containerd[2198]: time="2026-01-28T00:22:44.263279808Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 28 00:22:44.263291 containerd[2198]: time="2026-01-28T00:22:44.263288624Z" level=info msg="Start snapshots syncer" Jan 28 00:22:44.263324 containerd[2198]: time="2026-01-28T00:22:44.263307072Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 28 00:22:44.263499 containerd[2198]: time="2026-01-28T00:22:44.263471328Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 28 00:22:44.263622 containerd[2198]: time="2026-01-28T00:22:44.263508424Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 28 00:22:44.263622 containerd[2198]: time="2026-01-28T00:22:44.263542688Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 28 00:22:44.263653 containerd[2198]: time="2026-01-28T00:22:44.263625776Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 28 00:22:44.263653 containerd[2198]: time="2026-01-28T00:22:44.263643376Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 28 00:22:44.263653 containerd[2198]: time="2026-01-28T00:22:44.263649888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 28 00:22:44.263687 
containerd[2198]: time="2026-01-28T00:22:44.263656312Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 28 00:22:44.263687 containerd[2198]: time="2026-01-28T00:22:44.263664248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 28 00:22:44.263687 containerd[2198]: time="2026-01-28T00:22:44.263671920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 28 00:22:44.263687 containerd[2198]: time="2026-01-28T00:22:44.263678304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 28 00:22:44.263687 containerd[2198]: time="2026-01-28T00:22:44.263684680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 28 00:22:44.263895 containerd[2198]: time="2026-01-28T00:22:44.263691248Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 28 00:22:44.263895 containerd[2198]: time="2026-01-28T00:22:44.263720288Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 00:22:44.263895 containerd[2198]: time="2026-01-28T00:22:44.263728352Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 00:22:44.263895 containerd[2198]: time="2026-01-28T00:22:44.263733536Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 00:22:44.263895 containerd[2198]: time="2026-01-28T00:22:44.263739888Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 00:22:44.263895 containerd[2198]: time="2026-01-28T00:22:44.263744744Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 28 00:22:44.263895 containerd[2198]: time="2026-01-28T00:22:44.263750960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 28 00:22:44.263895 containerd[2198]: time="2026-01-28T00:22:44.263757360Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 28 00:22:44.263895 containerd[2198]: time="2026-01-28T00:22:44.263771336Z" level=info msg="runtime interface created" Jan 28 00:22:44.263895 containerd[2198]: time="2026-01-28T00:22:44.263774888Z" level=info msg="created NRI interface" Jan 28 00:22:44.263895 containerd[2198]: time="2026-01-28T00:22:44.263782152Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 28 00:22:44.263895 containerd[2198]: time="2026-01-28T00:22:44.263789248Z" level=info msg="Connect containerd service" Jan 28 00:22:44.263895 containerd[2198]: time="2026-01-28T00:22:44.263802664Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 00:22:44.269238 containerd[2198]: time="2026-01-28T00:22:44.269217344Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 00:22:44.293298 sshd_keygen[2140]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 
00:22:44.322109 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 00:22:44.328090 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 00:22:44.334134 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 28 00:22:44.351059 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 00:22:44.353991 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 00:22:44.362091 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 28 00:22:44.373070 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 00:22:44.402117 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 00:22:44.410743 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 00:22:44.418588 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 28 00:22:44.424561 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 00:22:44.456055 containerd[2198]: time="2026-01-28T00:22:44.456021696Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 00:22:44.456115 containerd[2198]: time="2026-01-28T00:22:44.456077120Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 00:22:44.456115 containerd[2198]: time="2026-01-28T00:22:44.456102528Z" level=info msg="Start subscribing containerd event" Jan 28 00:22:44.456157 containerd[2198]: time="2026-01-28T00:22:44.456137344Z" level=info msg="Start recovering state" Jan 28 00:22:44.459715 containerd[2198]: time="2026-01-28T00:22:44.456208184Z" level=info msg="Start event monitor" Jan 28 00:22:44.459715 containerd[2198]: time="2026-01-28T00:22:44.456221480Z" level=info msg="Start cni network conf syncer for default" Jan 28 00:22:44.459715 containerd[2198]: time="2026-01-28T00:22:44.456229792Z" level=info msg="Start streaming server" Jan 28 00:22:44.459715 containerd[2198]: time="2026-01-28T00:22:44.456236096Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 28 00:22:44.459715 containerd[2198]: time="2026-01-28T00:22:44.456259008Z" level=info msg="runtime interface starting up..." Jan 28 00:22:44.459715 containerd[2198]: time="2026-01-28T00:22:44.456265584Z" level=info msg="starting plugins..." Jan 28 00:22:44.459715 containerd[2198]: time="2026-01-28T00:22:44.456275776Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 28 00:22:44.459715 containerd[2198]: time="2026-01-28T00:22:44.456360704Z" level=info msg="containerd successfully booted in 0.237165s" Jan 28 00:22:44.456472 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 00:22:44.541076 tar[2196]: linux-arm64/README.md Jan 28 00:22:44.554764 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 00:22:44.811207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:22:44.816281 (kubelet)[2297]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:22:44.819188 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 00:22:44.824408 systemd[1]: Startup finished in 1.658s (kernel) + 7.275s (initrd) + 5.992s (userspace) = 14.926s. Jan 28 00:22:44.918222 login[2288]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:22:44.925144 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 28 00:22:44.927100 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 00:22:44.932118 waagent[2278]: 2026-01-28T00:22:44.927526Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 28 00:22:44.932608 waagent[2278]: 2026-01-28T00:22:44.932568Z INFO Daemon Daemon OS: flatcar 4547.1.0 Jan 28 00:22:44.937071 waagent[2278]: 2026-01-28T00:22:44.937033Z INFO Daemon Daemon Python: 3.11.13 Jan 28 00:22:44.941108 systemd-logind[2162]: New session 1 of user core. Jan 28 00:22:44.941385 waagent[2278]: 2026-01-28T00:22:44.941346Z INFO Daemon Daemon Run daemon Jan 28 00:22:44.945020 waagent[2278]: 2026-01-28T00:22:44.944984Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4547.1.0' Jan 28 00:22:44.957000 waagent[2278]: 2026-01-28T00:22:44.956956Z INFO Daemon Daemon Using waagent for provisioning Jan 28 00:22:44.961680 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 00:22:44.962083 waagent[2278]: 2026-01-28T00:22:44.962049Z INFO Daemon Daemon Activate resource disk Jan 28 00:22:44.963911 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 00:22:44.966939 waagent[2278]: 2026-01-28T00:22:44.966904Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 28 00:22:44.975304 waagent[2278]: 2026-01-28T00:22:44.975271Z INFO Daemon Daemon Found device: None Jan 28 00:22:44.979264 waagent[2278]: 2026-01-28T00:22:44.979229Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 28 00:22:44.985745 waagent[2278]: 2026-01-28T00:22:44.985714Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 28 00:22:44.994781 waagent[2278]: 2026-01-28T00:22:44.994746Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 28 00:22:44.998635 (systemd)[2311]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:22:44.999462 waagent[2278]: 2026-01-28T00:22:44.999430Z INFO Daemon Daemon Running default provisioning handler Jan 28 00:22:45.006039 systemd-logind[2162]: New session 2 of user core. Jan 28 00:22:45.010640 waagent[2278]: 2026-01-28T00:22:45.010604Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 28 00:22:45.020962 waagent[2278]: 2026-01-28T00:22:45.020928Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 28 00:22:45.028087 waagent[2278]: 2026-01-28T00:22:45.028051Z INFO Daemon Daemon cloud-init is enabled: False Jan 28 00:22:45.032900 waagent[2278]: 2026-01-28T00:22:45.032862Z INFO Daemon Daemon Copying ovf-env.xml Jan 28 00:22:45.084090 waagent[2278]: 2026-01-28T00:22:45.080638Z INFO Daemon Daemon Successfully mounted dvd Jan 28 00:22:45.103433 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 28 00:22:45.107941 waagent[2278]: 2026-01-28T00:22:45.107907Z INFO Daemon Daemon Detect protocol endpoint Jan 28 00:22:45.112323 waagent[2278]: 2026-01-28T00:22:45.111912Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 28 00:22:45.117437 waagent[2278]: 2026-01-28T00:22:45.117401Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 28 00:22:45.123931 waagent[2278]: 2026-01-28T00:22:45.123299Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 28 00:22:45.128592 waagent[2278]: 2026-01-28T00:22:45.128554Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 28 00:22:45.131696 systemd[2311]: Queued start job for default target default.target. Jan 28 00:22:45.133677 waagent[2278]: 2026-01-28T00:22:45.133642Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 28 00:22:45.136087 systemd[2311]: Created slice app.slice - User Application Slice. Jan 28 00:22:45.136107 systemd[2311]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 28 00:22:45.136116 systemd[2311]: Reached target paths.target - Paths. Jan 28 00:22:45.136145 systemd[2311]: Reached target timers.target - Timers. Jan 28 00:22:45.136966 systemd[2311]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 00:22:45.139041 systemd[2311]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 28 00:22:45.151062 systemd[2311]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 00:22:45.151098 systemd[2311]: Reached target sockets.target - Sockets. Jan 28 00:22:45.152419 systemd[2311]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 28 00:22:45.152464 systemd[2311]: Reached target basic.target - Basic System. Jan 28 00:22:45.152500 systemd[2311]: Reached target default.target - Main User Target. Jan 28 00:22:45.152519 systemd[2311]: Startup finished in 141ms. Jan 28 00:22:45.153667 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 00:22:45.160094 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 00:22:45.220853 login[2289]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:22:45.225112 systemd-logind[2162]: New session 3 of user core. Jan 28 00:22:45.235072 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 00:22:45.336756 kubelet[2297]: E0128 00:22:45.336644 2297 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:22:45.338876 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:22:45.339069 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:22:45.339647 systemd[1]: kubelet.service: Consumed 537ms CPU time, 257.2M memory peak. Jan 28 00:22:45.576855 waagent[2278]: 2026-01-28T00:22:45.575330Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 28 00:22:45.580787 waagent[2278]: 2026-01-28T00:22:45.580757Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 28 00:22:45.584732 waagent[2278]: 2026-01-28T00:22:45.584698Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 28 00:22:45.644892 waagent[2278]: 2026-01-28T00:22:45.644672Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 28 00:22:45.649490 waagent[2278]: 2026-01-28T00:22:45.649457Z INFO Daemon Daemon Forcing an update of the goal state. 
Jan 28 00:22:45.656326 waagent[2278]: 2026-01-28T00:22:45.656290Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 28 00:22:45.671225 waagent[2278]: 2026-01-28T00:22:45.671195Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 28 00:22:45.675809 waagent[2278]: 2026-01-28T00:22:45.675776Z INFO Daemon Jan 28 00:22:45.677879 waagent[2278]: 2026-01-28T00:22:45.677853Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: f4640ce2-aa04-48ef-8a44-3654f9ee9ee5 eTag: 7014138462500925683 source: Fabric] Jan 28 00:22:45.686481 waagent[2278]: 2026-01-28T00:22:45.686449Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 28 00:22:45.691784 waagent[2278]: 2026-01-28T00:22:45.691754Z INFO Daemon Jan 28 00:22:45.693815 waagent[2278]: 2026-01-28T00:22:45.693790Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 28 00:22:45.701802 waagent[2278]: 2026-01-28T00:22:45.701774Z INFO Daemon Daemon Downloading artifacts profile blob Jan 28 00:22:45.757186 waagent[2278]: 2026-01-28T00:22:45.757135Z INFO Daemon Downloaded certificate {'thumbprint': 'B1A85017FFE3F7A1DE313840AE67A4C6CC9570D0', 'hasPrivateKey': True} Jan 28 00:22:45.764342 waagent[2278]: 2026-01-28T00:22:45.764308Z INFO Daemon Fetch goal state completed Jan 28 00:22:45.772802 waagent[2278]: 2026-01-28T00:22:45.772773Z INFO Daemon Daemon Starting provisioning Jan 28 00:22:45.776571 waagent[2278]: 2026-01-28T00:22:45.776531Z INFO Daemon Daemon Handle ovf-env.xml. Jan 28 00:22:45.779886 waagent[2278]: 2026-01-28T00:22:45.779853Z INFO Daemon Daemon Set hostname [ci-4547.1.0-n-77eb5aaac5] Jan 28 00:22:45.791699 waagent[2278]: 2026-01-28T00:22:45.791657Z INFO Daemon Daemon Publish hostname [ci-4547.1.0-n-77eb5aaac5] Jan 28 00:22:45.796159 waagent[2278]: 2026-01-28T00:22:45.796126Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 28 00:22:45.800696 waagent[2278]: 2026-01-28T00:22:45.800665Z INFO Daemon Daemon Primary interface is [eth0] Jan 28 00:22:45.809643 systemd-networkd[1903]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 00:22:45.809650 systemd-networkd[1903]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Jan 28 00:22:45.809720 systemd-networkd[1903]: eth0: DHCP lease lost Jan 28 00:22:45.827834 waagent[2278]: 2026-01-28T00:22:45.827229Z INFO Daemon Daemon Create user account if not exists Jan 28 00:22:45.831565 waagent[2278]: 2026-01-28T00:22:45.831525Z INFO Daemon Daemon User core already exists, skip useradd Jan 28 00:22:45.835917 waagent[2278]: 2026-01-28T00:22:45.835876Z INFO Daemon Daemon Configure sudoer Jan 28 00:22:45.840874 systemd-networkd[1903]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 28 00:22:45.843178 waagent[2278]: 2026-01-28T00:22:45.843135Z INFO Daemon Daemon Configure sshd Jan 28 00:22:45.849499 waagent[2278]: 2026-01-28T00:22:45.849460Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 28 00:22:45.859500 waagent[2278]: 2026-01-28T00:22:45.859466Z INFO Daemon Daemon Deploy ssh public key. 
Jan 28 00:22:46.915270 waagent[2278]: 2026-01-28T00:22:46.915212Z INFO Daemon Daemon Provisioning complete Jan 28 00:22:46.928293 waagent[2278]: 2026-01-28T00:22:46.928258Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 28 00:22:46.932634 waagent[2278]: 2026-01-28T00:22:46.932604Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 28 00:22:46.939615 waagent[2278]: 2026-01-28T00:22:46.939589Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 28 00:22:47.034842 waagent[2368]: 2026-01-28T00:22:47.034656Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 28 00:22:47.034842 waagent[2368]: 2026-01-28T00:22:47.034745Z INFO ExtHandler ExtHandler OS: flatcar 4547.1.0 Jan 28 00:22:47.034842 waagent[2368]: 2026-01-28T00:22:47.034780Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 28 00:22:47.035843 waagent[2368]: 2026-01-28T00:22:47.034812Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jan 28 00:22:47.051217 waagent[2368]: 2026-01-28T00:22:47.051184Z INFO ExtHandler ExtHandler Distro: flatcar-4547.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 28 00:22:47.051405 waagent[2368]: 2026-01-28T00:22:47.051380Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 28 00:22:47.051512 waagent[2368]: 2026-01-28T00:22:47.051491Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 28 00:22:47.056500 waagent[2368]: 2026-01-28T00:22:47.056459Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 28 00:22:47.061067 waagent[2368]: 2026-01-28T00:22:47.061041Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 28 00:22:47.061473 waagent[2368]: 2026-01-28T00:22:47.061444Z INFO ExtHandler Jan 28 00:22:47.061589 waagent[2368]: 2026-01-28T00:22:47.061567Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 0ef7cf6a-1feb-49d7-a57a-ba6c00aeb321 eTag: 7014138462500925683 source: Fabric] Jan 28 00:22:47.061930 waagent[2368]: 2026-01-28T00:22:47.061902Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 28 00:22:47.062410 waagent[2368]: 2026-01-28T00:22:47.062381Z INFO ExtHandler Jan 28 00:22:47.062530 waagent[2368]: 2026-01-28T00:22:47.062508Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 28 00:22:47.065797 waagent[2368]: 2026-01-28T00:22:47.065771Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 28 00:22:47.128842 waagent[2368]: 2026-01-28T00:22:47.128485Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B1A85017FFE3F7A1DE313840AE67A4C6CC9570D0', 'hasPrivateKey': True} Jan 28 00:22:47.128896 waagent[2368]: 2026-01-28T00:22:47.128852Z INFO ExtHandler Fetch goal state completed Jan 28 00:22:47.139527 waagent[2368]: 2026-01-28T00:22:47.139484Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.4 30 Sep 2025 (Library: OpenSSL 3.5.4 30 Sep 2025) Jan 28 00:22:47.142621 waagent[2368]: 2026-01-28T00:22:47.142580Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2368 Jan 28 00:22:47.142712 waagent[2368]: 2026-01-28T00:22:47.142687Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 28 00:22:47.142969 waagent[2368]: 2026-01-28T00:22:47.142943Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 28 00:22:47.143985 waagent[2368]: 2026-01-28T00:22:47.143953Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4547.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 28 00:22:47.144291 waagent[2368]: 2026-01-28T00:22:47.144263Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4547.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 28 00:22:47.144394 waagent[2368]: 2026-01-28T00:22:47.144372Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 28 00:22:47.144795 waagent[2368]: 2026-01-28T00:22:47.144766Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 28 00:22:47.157035 waagent[2368]: 2026-01-28T00:22:47.157007Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 28 00:22:47.157153 waagent[2368]: 2026-01-28T00:22:47.157128Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 28 00:22:47.161968 waagent[2368]: 2026-01-28T00:22:47.161558Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 28 00:22:47.166056 systemd[1]: Reload requested from client PID 2383 ('systemctl') (unit waagent.service)... Jan 28 00:22:47.166275 systemd[1]: Reloading... Jan 28 00:22:47.234858 zram_generator::config[2431]: No configuration found. Jan 28 00:22:47.381650 systemd[1]: Reloading finished in 215 ms. Jan 28 00:22:47.414853 waagent[2368]: 2026-01-28T00:22:47.413500Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 28 00:22:47.414853 waagent[2368]: 2026-01-28T00:22:47.413632Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 28 00:22:47.485437 waagent[2368]: 2026-01-28T00:22:47.484760Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 28 00:22:47.485437 waagent[2368]: 2026-01-28T00:22:47.485022Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 28 00:22:47.485593 waagent[2368]: 2026-01-28T00:22:47.485550Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 28 00:22:47.485697 waagent[2368]: 2026-01-28T00:22:47.485661Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 28 00:22:47.485751 waagent[2368]: 2026-01-28T00:22:47.485730Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 28 00:22:47.485929 waagent[2368]: 2026-01-28T00:22:47.485899Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 28 00:22:47.486286 waagent[2368]: 2026-01-28T00:22:47.486250Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 28 00:22:47.486353 waagent[2368]: 2026-01-28T00:22:47.486323Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 28 00:22:47.486353 waagent[2368]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 28 00:22:47.486353 waagent[2368]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 28 00:22:47.486353 waagent[2368]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 28 00:22:47.486353 waagent[2368]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 28 00:22:47.486353 waagent[2368]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 28 00:22:47.486353 waagent[2368]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 28 00:22:47.486730 waagent[2368]: 2026-01-28T00:22:47.486697Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 28 00:22:47.486885 waagent[2368]: 2026-01-28T00:22:47.486844Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 28 00:22:47.486984 waagent[2368]: 2026-01-28T00:22:47.486956Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 28 00:22:47.487035 waagent[2368]: 2026-01-28T00:22:47.487016Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 28 00:22:47.487134 waagent[2368]: 2026-01-28T00:22:47.487106Z INFO EnvHandler ExtHandler Configure routes Jan 28 00:22:47.487173 waagent[2368]: 2026-01-28T00:22:47.487156Z INFO EnvHandler ExtHandler Gateway:None Jan 28 00:22:47.487202 waagent[2368]: 2026-01-28T00:22:47.487185Z INFO EnvHandler ExtHandler Routes:None Jan 28 00:22:47.487558 waagent[2368]: 2026-01-28T00:22:47.487520Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 28 00:22:47.487674 waagent[2368]: 2026-01-28T00:22:47.487594Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 28 00:22:47.487674 waagent[2368]: 2026-01-28T00:22:47.487623Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 28 00:22:47.494188 waagent[2368]: 2026-01-28T00:22:47.494151Z INFO ExtHandler ExtHandler Jan 28 00:22:47.494238 waagent[2368]: 2026-01-28T00:22:47.494209Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 78212dee-2349-47d8-a76b-e6d62868a154 correlation e617f149-8b8e-4965-906e-7086ff53a5af created: 2026-01-28T00:22:15.571648Z] Jan 28 00:22:47.494463 waagent[2368]: 2026-01-28T00:22:47.494432Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 28 00:22:47.494855 waagent[2368]: 2026-01-28T00:22:47.494808Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jan 28 00:22:47.505794 waagent[2368]: 2026-01-28T00:22:47.505753Z INFO MonitorHandler ExtHandler Network interfaces: Jan 28 00:22:47.505794 waagent[2368]: Executing ['ip', '-a', '-o', 'link']: Jan 28 00:22:47.505794 waagent[2368]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 28 00:22:47.505794 waagent[2368]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:89:b2:4c brd ff:ff:ff:ff:ff:ff\ altname enx7ced8d89b24c Jan 28 00:22:47.505794 waagent[2368]: 3: enP34556s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:89:b2:4c brd ff:ff:ff:ff:ff:ff\ altname enP34556p0s2 Jan 28 00:22:47.505794 waagent[2368]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 28 00:22:47.505794 waagent[2368]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 28 00:22:47.505794 waagent[2368]: 2: eth0 inet 10.200.20.33/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 28 00:22:47.505794 waagent[2368]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 28 00:22:47.505794 waagent[2368]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 28 00:22:47.505794 waagent[2368]: 2: eth0 inet6 fe80::7eed:8dff:fe89:b24c/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 28 00:22:47.515034 waagent[2368]: 2026-01-28T00:22:47.514992Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 28 00:22:47.515034 waagent[2368]: Try `iptables -h' or 'iptables --help' for more information.) 
Jan 28 00:22:47.515366 waagent[2368]: 2026-01-28T00:22:47.515284Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 560B2FA8-7A01-4965-A7DD-16926418ECEA;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 28 00:22:47.531867 waagent[2368]: 2026-01-28T00:22:47.531835Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 28 00:22:47.531867 waagent[2368]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 28 00:22:47.531867 waagent[2368]: pkts bytes target prot opt in out source destination Jan 28 00:22:47.531867 waagent[2368]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 28 00:22:47.531867 waagent[2368]: pkts bytes target prot opt in out source destination Jan 28 00:22:47.531867 waagent[2368]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 28 00:22:47.531867 waagent[2368]: pkts bytes target prot opt in out source destination Jan 28 00:22:47.531867 waagent[2368]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 28 00:22:47.531867 waagent[2368]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 28 00:22:47.531867 waagent[2368]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 28 00:22:47.534403 waagent[2368]: 2026-01-28T00:22:47.534217Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 28 00:22:47.534403 waagent[2368]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 28 00:22:47.534403 waagent[2368]: pkts bytes target prot opt in out source destination Jan 28 00:22:47.534403 waagent[2368]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 28 00:22:47.534403 waagent[2368]: pkts bytes target prot opt in out source destination Jan 28 00:22:47.534403 waagent[2368]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 28 00:22:47.534403 waagent[2368]: pkts bytes target prot opt in out source destination Jan 28 00:22:47.534403 waagent[2368]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 28 00:22:47.534403 waagent[2368]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 28 00:22:47.534403 waagent[2368]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 28 00:22:47.534856 waagent[2368]: 2026-01-28T00:22:47.534808Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 28 00:22:55.565395 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 00:22:55.567045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:22:55.669530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:22:55.675016 (kubelet)[2521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:22:55.782993 kubelet[2521]: E0128 00:22:55.782947 2521 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:22:55.785514 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:22:55.785708 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:22:55.786080 systemd[1]: kubelet.service: Consumed 106ms CPU time, 107.1M memory peak. 
Jan 28 00:23:01.639910 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 00:23:01.641103 systemd[1]: Started sshd@0-10.200.20.33:22-10.200.16.10:52478.service - OpenSSH per-connection server daemon (10.200.16.10:52478). Jan 28 00:23:02.113852 sshd[2529]: Accepted publickey for core from 10.200.16.10 port 52478 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:23:02.114640 sshd-session[2529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:23:02.118793 systemd-logind[2162]: New session 4 of user core. Jan 28 00:23:02.121938 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 00:23:02.423303 systemd[1]: Started sshd@1-10.200.20.33:22-10.200.16.10:52490.service - OpenSSH per-connection server daemon (10.200.16.10:52490). Jan 28 00:23:02.822044 sshd[2536]: Accepted publickey for core from 10.200.16.10 port 52490 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:23:02.823132 sshd-session[2536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:23:02.826624 systemd-logind[2162]: New session 5 of user core. Jan 28 00:23:02.833957 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 00:23:03.037509 sshd[2540]: Connection closed by 10.200.16.10 port 52490 Jan 28 00:23:03.038030 sshd-session[2536]: pam_unix(sshd:session): session closed for user core Jan 28 00:23:03.041045 systemd-logind[2162]: Session 5 logged out. Waiting for processes to exit. Jan 28 00:23:03.041615 systemd[1]: sshd@1-10.200.20.33:22-10.200.16.10:52490.service: Deactivated successfully. Jan 28 00:23:03.043057 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 00:23:03.044238 systemd-logind[2162]: Removed session 5. Jan 28 00:23:03.117162 systemd[1]: Started sshd@2-10.200.20.33:22-10.200.16.10:52492.service - OpenSSH per-connection server daemon (10.200.16.10:52492). Jan 28 00:23:03.502784 sshd[2546]: Accepted publickey for core from 10.200.16.10 port 52492 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:23:03.503973 sshd-session[2546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:23:03.507357 systemd-logind[2162]: New session 6 of user core. Jan 28 00:23:03.516937 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 00:23:03.713722 sshd[2550]: Connection closed by 10.200.16.10 port 52492 Jan 28 00:23:03.713646 sshd-session[2546]: pam_unix(sshd:session): session closed for user core Jan 28 00:23:03.717631 systemd-logind[2162]: Session 6 logged out. Waiting for processes to exit. Jan 28 00:23:03.717767 systemd[1]: sshd@2-10.200.20.33:22-10.200.16.10:52492.service: Deactivated successfully. Jan 28 00:23:03.719005 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 00:23:03.720099 systemd-logind[2162]: Removed session 6. Jan 28 00:23:03.792968 systemd[1]: Started sshd@3-10.200.20.33:22-10.200.16.10:52506.service - OpenSSH per-connection server daemon (10.200.16.10:52506). Jan 28 00:23:04.180852 sshd[2556]: Accepted publickey for core from 10.200.16.10 port 52506 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:23:04.181619 sshd-session[2556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:23:04.185880 systemd-logind[2162]: New session 7 of user core. Jan 28 00:23:04.191954 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 28 00:23:04.394626 sshd[2560]: Connection closed by 10.200.16.10 port 52506 Jan 28 00:23:04.395089 sshd-session[2556]: pam_unix(sshd:session): session closed for user core Jan 28 00:23:04.398182 systemd[1]: sshd@3-10.200.20.33:22-10.200.16.10:52506.service: Deactivated successfully. Jan 28 00:23:04.401970 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 00:23:04.403087 systemd-logind[2162]: Session 7 logged out. Waiting for processes to exit. Jan 28 00:23:04.404264 systemd-logind[2162]: Removed session 7. Jan 28 00:23:04.482777 systemd[1]: Started sshd@4-10.200.20.33:22-10.200.16.10:52518.service - OpenSSH per-connection server daemon (10.200.16.10:52518). Jan 28 00:23:04.903225 sshd[2566]: Accepted publickey for core from 10.200.16.10 port 52518 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:23:04.904283 sshd-session[2566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:23:04.907690 systemd-logind[2162]: New session 8 of user core. Jan 28 00:23:04.918956 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 00:23:05.091633 sudo[2571]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 00:23:05.091849 sudo[2571]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:23:05.103403 sudo[2571]: pam_unix(sudo:session): session closed for user root Jan 28 00:23:05.181620 sshd[2570]: Connection closed by 10.200.16.10 port 52518 Jan 28 00:23:05.180926 sshd-session[2566]: pam_unix(sshd:session): session closed for user core Jan 28 00:23:05.184293 systemd-logind[2162]: Session 8 logged out. Waiting for processes to exit. Jan 28 00:23:05.184546 systemd[1]: sshd@4-10.200.20.33:22-10.200.16.10:52518.service: Deactivated successfully. Jan 28 00:23:05.186364 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 00:23:05.188301 systemd-logind[2162]: Removed session 8. Jan 28 00:23:05.264325 systemd[1]: Started sshd@5-10.200.20.33:22-10.200.16.10:52530.service - OpenSSH per-connection server daemon (10.200.16.10:52530). Jan 28 00:23:05.659644 sshd[2578]: Accepted publickey for core from 10.200.16.10 port 52530 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:23:05.660679 sshd-session[2578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:23:05.664310 systemd-logind[2162]: New session 9 of user core. Jan 28 00:23:05.671081 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 28 00:23:05.804623 sudo[2584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 00:23:05.804835 sudo[2584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:23:05.809537 sudo[2584]: pam_unix(sudo:session): session closed for user root Jan 28 00:23:05.813748 sudo[2583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 28 00:23:05.813955 sudo[2583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:23:05.815268 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 00:23:05.818035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:23:05.824029 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 28 00:23:05.860000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 00:23:05.864090 kernel: kauditd_printk_skb: 133 callbacks suppressed Jan 28 00:23:05.864140 kernel: audit: type=1305 audit(1769559785.860:238): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 00:23:05.864978 augenrules[2611]: No rules Jan 28 00:23:05.873004 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 00:23:05.873224 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 00:23:05.860000 audit[2611]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd1ccff70 a2=420 a3=0 items=0 ppid=2590 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:05.893865 kernel: audit: type=1300 audit(1769559785.860:238): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd1ccff70 a2=420 a3=0 items=0 ppid=2590 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:05.860000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 00:23:05.895137 sudo[2583]: pam_unix(sudo:session): session closed for user root Jan 28 00:23:05.904121 kernel: audit: type=1327 audit(1769559785.860:238): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 00:23:05.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:23:05.917803 kernel: audit: type=1130 audit(1769559785.872:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:23:05.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:23:05.931454 kernel: audit: type=1131 audit(1769559785.872:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:23:05.931490 kernel: audit: type=1106 audit(1769559785.896:241): pid=2583 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 00:23:05.896000 audit[2583]: USER_END pid=2583 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 00:23:05.896000 audit[2583]: CRED_DISP pid=2583 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 28 00:23:05.959261 kernel: audit: type=1104 audit(1769559785.896:242): pid=2583 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 00:23:05.965470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:23:05.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:23:05.979439 sshd[2582]: Connection closed by 10.200.16.10 port 52530 Jan 28 00:23:05.979344 sshd-session[2578]: pam_unix(sshd:session): session closed for user core Jan 28 00:23:05.979000 audit[2578]: USER_END pid=2578 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:23:06.000384 kernel: audit: type=1130 audit(1769559785.964:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:23:06.000432 kernel: audit: type=1106 audit(1769559785.979:244): pid=2578 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:23:06.001181 systemd[1]: sshd@5-10.200.20.33:22-10.200.16.10:52530.service: Deactivated successfully. Jan 28 00:23:05.979000 audit[2578]: CRED_DISP pid=2578 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:23:06.002071 (kubelet)[2621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:23:06.015897 kernel: audit: type=1104 audit(1769559785.979:245): pid=2578 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:23:06.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.33:22-10.200.16.10:52530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:23:06.017511 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 00:23:06.018587 systemd-logind[2162]: Session 9 logged out. Waiting for processes to exit. Jan 28 00:23:06.019675 systemd-logind[2162]: Removed session 9. 
Jan 28 00:23:06.044379 kubelet[2621]: E0128 00:23:06.044333 2621 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:23:06.046129 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:23:06.046225 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:23:06.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 00:23:06.046505 systemd[1]: kubelet.service: Consumed 102ms CPU time, 105.3M memory peak. Jan 28 00:23:06.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.33:22-10.200.16.10:52540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:23:06.056135 systemd[1]: Started sshd@6-10.200.20.33:22-10.200.16.10:52540.service - OpenSSH per-connection server daemon (10.200.16.10:52540). Jan 28 00:23:06.440000 audit[2632]: USER_ACCT pid=2632 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:23:06.441943 sshd[2632]: Accepted publickey for core from 10.200.16.10 port 52540 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:23:06.442000 audit[2632]: CRED_ACQ pid=2632 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:23:06.442000 audit[2632]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd26be740 a2=3 a3=0 items=0 ppid=1 pid=2632 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:06.442000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:23:06.443241 sshd-session[2632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:23:06.446964 systemd-logind[2162]: New session 10 of user core. Jan 28 00:23:06.454940 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 28 00:23:06.457000 audit[2632]: USER_START pid=2632 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:23:06.458000 audit[2636]: CRED_ACQ pid=2636 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:23:06.587570 sudo[2637]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 00:23:06.587773 sudo[2637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:23:06.586000 audit[2637]: USER_ACCT pid=2637 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 00:23:06.586000 audit[2637]: CRED_REFR pid=2637 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 00:23:06.586000 audit[2637]: USER_START pid=2637 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 00:23:06.875010 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 00:23:06.879025 (dockerd)[2656]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 00:23:07.136934 dockerd[2656]: time="2026-01-28T00:23:07.135915024Z" level=info msg="Starting up" Jan 28 00:23:07.136934 dockerd[2656]: time="2026-01-28T00:23:07.136379736Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 28 00:23:07.144275 dockerd[2656]: time="2026-01-28T00:23:07.144233688Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 28 00:23:07.181007 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2983974335-merged.mount: Deactivated successfully. Jan 28 00:23:07.257612 dockerd[2656]: time="2026-01-28T00:23:07.257481728Z" level=info msg="Loading containers: start." 
Jan 28 00:23:07.277843 kernel: Initializing XFRM netlink socket Jan 28 00:23:07.303000 audit[2703]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2703 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.303000 audit[2703]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd807bbe0 a2=0 a3=0 items=0 ppid=2656 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.303000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 28 00:23:07.304000 audit[2705]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2705 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.304000 audit[2705]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffdc0433d0 a2=0 a3=0 items=0 ppid=2656 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.304000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 28 00:23:07.306000 audit[2707]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2707 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.306000 audit[2707]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc800b8d0 a2=0 a3=0 items=0 ppid=2656 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.306000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 28 00:23:07.307000 audit[2709]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2709 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.307000 audit[2709]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffb17ab90 a2=0 a3=0 items=0 ppid=2656 pid=2709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.307000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 28 00:23:07.309000 audit[2711]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=2711 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.309000 audit[2711]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff4538dc0 a2=0 a3=0 items=0 ppid=2656 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.309000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 28 00:23:07.310000 audit[2713]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=2713 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.310000 audit[2713]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=112 a0=3 a1=fffff3579340 a2=0 a3=0 items=0 ppid=2656 pid=2713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.310000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 00:23:07.311000 audit[2715]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2715 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.311000 audit[2715]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffff432ff0 a2=0 a3=0 items=0 ppid=2656 pid=2715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.311000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 00:23:07.313000 audit[2717]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=2717 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.313000 audit[2717]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffff6093d50 a2=0 a3=0 items=0 ppid=2656 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.313000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 28 00:23:07.328000 audit[2720]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2720 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.328000 audit[2720]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=fffff2e3f200 a2=0 a3=0 items=0 ppid=2656 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.328000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 28 00:23:07.330000 audit[2722]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=2722 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.330000 audit[2722]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffeab700c0 a2=0 a3=0 items=0 ppid=2656 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.330000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 28 00:23:07.331000 audit[2724]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2724 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.331000 audit[2724]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 
a1=ffffda057850 a2=0 a3=0 items=0 ppid=2656 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.331000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 28 00:23:07.333000 audit[2726]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2726 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.333000 audit[2726]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffce7239f0 a2=0 a3=0 items=0 ppid=2656 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.333000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 00:23:07.334000 audit[2728]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=2728 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.334000 audit[2728]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffff806d350 a2=0 a3=0 items=0 ppid=2656 pid=2728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.334000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 28 00:23:07.374000 audit[2758]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=2758 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:07.374000 audit[2758]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffeb4666a0 a2=0 a3=0 items=0 ppid=2656 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.374000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 28 00:23:07.376000 audit[2760]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=2760 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:07.376000 audit[2760]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe973d440 a2=0 a3=0 items=0 ppid=2656 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.376000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 28 00:23:07.377000 audit[2762]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2762 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:07.377000 audit[2762]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdebd6230 a2=0 a3=0 items=0 ppid=2656 pid=2762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.377000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 28 00:23:07.379000 audit[2764]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2764 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:07.379000 audit[2764]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc3d0b840 a2=0 a3=0 items=0 ppid=2656 pid=2764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.379000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 28 00:23:07.381000 audit[2766]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=2766 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:07.381000 audit[2766]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffebc47680 a2=0 a3=0 items=0 ppid=2656 pid=2766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.381000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 28 00:23:07.382000 audit[2768]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=2768 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:07.382000 audit[2768]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff8b4f9c0 a2=0 a3=0 items=0 ppid=2656 pid=2768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.382000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 00:23:07.384000 audit[2770]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=2770 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:07.384000 audit[2770]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffddab2860 a2=0 a3=0 items=0 ppid=2656 pid=2770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.384000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 00:23:07.385000 audit[2772]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=2772 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:07.385000 audit[2772]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffffdd53080 a2=0 a3=0 items=0 ppid=2656 pid=2772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.385000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 28 00:23:07.388000 audit[2774]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=2774 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:07.388000 audit[2774]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=fffff1b0a590 a2=0 a3=0 items=0 ppid=2656 pid=2774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.388000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 28 00:23:07.389000 audit[2776]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=2776 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:07.389000 audit[2776]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc7210790 a2=0 a3=0 items=0 ppid=2656 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.389000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 28 00:23:07.391000 audit[2778]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=2778 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:07.391000 audit[2778]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffcbf9c0b0 a2=0 a3=0 items=0 ppid=2656 pid=2778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.391000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 28 00:23:07.392000 audit[2780]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=2780 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:07.392000 audit[2780]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffe552e390 a2=0 a3=0 items=0 ppid=2656 pid=2780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.392000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 00:23:07.394000 audit[2782]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=2782 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:07.394000 audit[2782]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffdceaac90 a2=0 a3=0 items=0 ppid=2656 pid=2782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.394000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 28 00:23:07.397000 audit[2787]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=2787 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.397000 audit[2787]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe3638410 a2=0 a3=0 items=0 ppid=2656 pid=2787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.397000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 28 00:23:07.399000 audit[2789]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2789 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.399000 audit[2789]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffffbb706c0 a2=0 a3=0 items=0 ppid=2656 pid=2789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.399000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 28 00:23:07.400000 audit[2791]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2791 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.400000 audit[2791]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff331fdc0 a2=0 a3=0 items=0 ppid=2656 pid=2791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.400000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 28 00:23:07.402000 audit[2793]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2793 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:07.402000 audit[2793]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd17475d0 a2=0 a3=0 items=0 ppid=2656 pid=2793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.402000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 28 00:23:07.403000 audit[2795]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=2795 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:07.403000 audit[2795]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd3d251b0 a2=0 a3=0 items=0 ppid=2656 pid=2795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.403000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 28 00:23:07.405000 audit[2797]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=2797 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:07.405000 audit[2797]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe564a420 a2=0 a3=0 items=0 ppid=2656 pid=2797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.405000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 28 00:23:07.442000 audit[2802]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=2802 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.442000 audit[2802]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffc2cbc1b0 a2=0 a3=0 items=0 ppid=2656 pid=2802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.442000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 28 00:23:07.444000 audit[2804]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=2804 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.444000 audit[2804]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffe1215e50 a2=0 a3=0 items=0 ppid=2656 pid=2804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.444000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 28 00:23:07.450000 audit[2812]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2812 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.450000 audit[2812]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffc01351d0 a2=0 a3=0 items=0 ppid=2656 pid=2812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.450000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 28 00:23:07.454000 audit[2817]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2817 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.454000 audit[2817]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffffb3880d0 a2=0 a3=0 items=0 ppid=2656 pid=2817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.454000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 28 00:23:07.455000 audit[2819]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2819 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 
00:23:07.455000 audit[2819]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffee7a3a10 a2=0 a3=0 items=0 ppid=2656 pid=2819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.455000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 28 00:23:07.457000 audit[2821]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2821 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.457000 audit[2821]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe5143df0 a2=0 a3=0 items=0 ppid=2656 pid=2821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.457000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 28 00:23:07.459000 audit[2823]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=2823 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.459000 audit[2823]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffd40ed150 a2=0 a3=0 items=0 ppid=2656 pid=2823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.459000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 00:23:07.460000 audit[2825]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=2825 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:07.460000 audit[2825]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd18c2170 a2=0 a3=0 items=0 ppid=2656 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:07.460000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 28 00:23:07.461878 systemd-networkd[1903]: docker0: Link UP Jan 28 00:23:07.478172 dockerd[2656]: time="2026-01-28T00:23:07.478141464Z" level=info msg="Loading containers: done." Jan 28 00:23:07.488785 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2555892121-merged.mount: Deactivated successfully. 
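
[editor's note] The audit triples above (NETFILTER_CFG, SYSCALL, PROCTITLE) record dockerd invoking iptables/ip6tables via xtables-nft-multi to build its chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2, DOCKER-USER). The proctitle field is the invoked command line, hex-encoded with NUL-separated arguments, so it can be decoded directly. A minimal decoding sketch in Python; the sample value is copied verbatim from the DOCKER-USER record above:

    # Decode an auditd PROCTITLE value: the process argv, hex-encoded,
    # with NUL bytes separating the individual arguments.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

    # Value taken from one of the NETFILTER_CFG records above:
    print(decode_proctitle(
        "2F7573722F62696E2F69707461626C6573002D2D77616974002D74"
        "0066696C746572002D4E00444F434B45522D55534552"
    ))
    # -> /usr/bin/iptables --wait -t filter -N DOCKER-USER
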
Jan 28 00:23:07.536648 dockerd[2656]: time="2026-01-28T00:23:07.536610480Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 00:23:07.536773 dockerd[2656]: time="2026-01-28T00:23:07.536682072Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 28 00:23:07.536773 dockerd[2656]: time="2026-01-28T00:23:07.536767488Z" level=info msg="Initializing buildkit" Jan 28 00:23:07.551868 chronyd[2091]: Selected source PHC0 Jan 28 00:23:07.589162 dockerd[2656]: time="2026-01-28T00:23:07.589129401Z" level=info msg="Completed buildkit initialization" Jan 28 00:23:07.593403 dockerd[2656]: time="2026-01-28T00:23:07.593365763Z" level=info msg="Daemon has completed initialization" Jan 28 00:23:07.593851 dockerd[2656]: time="2026-01-28T00:23:07.593638091Z" level=info msg="API listen on /run/docker.sock" Jan 28 00:23:07.594180 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 00:23:07.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:23:08.300866 containerd[2198]: time="2026-01-28T00:23:08.300780142Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 28 00:23:09.107667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount207993610.mount: Deactivated successfully. Jan 28 00:23:10.376839 containerd[2198]: time="2026-01-28T00:23:10.376775307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:10.380547 containerd[2198]: time="2026-01-28T00:23:10.380368923Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=24930699" Jan 28 00:23:10.383722 containerd[2198]: time="2026-01-28T00:23:10.383697499Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:10.388084 containerd[2198]: time="2026-01-28T00:23:10.388062859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:10.388581 containerd[2198]: time="2026-01-28T00:23:10.388561107Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.087610136s" Jan 28 00:23:10.388756 containerd[2198]: time="2026-01-28T00:23:10.388650899Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 28 00:23:10.389280 containerd[2198]: time="2026-01-28T00:23:10.389255659Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 28 00:23:12.217469 containerd[2198]: time="2026-01-28T00:23:12.216883707Z" level=info 
msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:12.219698 containerd[2198]: time="2026-01-28T00:23:12.219676251Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22613932" Jan 28 00:23:12.222871 containerd[2198]: time="2026-01-28T00:23:12.222851219Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:12.227592 containerd[2198]: time="2026-01-28T00:23:12.227564203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:12.228212 containerd[2198]: time="2026-01-28T00:23:12.228037011Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.83875488s" Jan 28 00:23:12.228212 containerd[2198]: time="2026-01-28T00:23:12.228139899Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 28 00:23:12.228958 containerd[2198]: time="2026-01-28T00:23:12.228913779Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 28 00:23:13.504842 containerd[2198]: time="2026-01-28T00:23:13.504259547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:13.507074 containerd[2198]: time="2026-01-28T00:23:13.507054235Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17608611" Jan 28 00:23:13.510510 containerd[2198]: time="2026-01-28T00:23:13.510492395Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:13.515417 containerd[2198]: time="2026-01-28T00:23:13.515387307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:13.515977 containerd[2198]: time="2026-01-28T00:23:13.515955827Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.287021984s" Jan 28 00:23:13.516054 containerd[2198]: time="2026-01-28T00:23:13.516042587Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 28 00:23:13.516471 containerd[2198]: time="2026-01-28T00:23:13.516449235Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 28 00:23:14.560806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3910405486.mount: Deactivated successfully. Jan 28 00:23:14.949736 containerd[2198]: time="2026-01-28T00:23:14.949683683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:14.953209 containerd[2198]: time="2026-01-28T00:23:14.953170995Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=17713718" Jan 28 00:23:14.956539 containerd[2198]: time="2026-01-28T00:23:14.956506171Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:14.960851 containerd[2198]: time="2026-01-28T00:23:14.960805843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:14.961610 containerd[2198]: time="2026-01-28T00:23:14.961475667Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.445002576s" Jan 28 00:23:14.961610 containerd[2198]: time="2026-01-28T00:23:14.961500755Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 28 00:23:14.962161 containerd[2198]: time="2026-01-28T00:23:14.962006843Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 28 00:23:15.789392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount981604678.mount: Deactivated successfully. Jan 28 00:23:16.065346 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 00:23:16.066629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:23:16.356252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:23:16.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:23:16.360156 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 28 00:23:16.360221 kernel: audit: type=1130 audit(1769559796.355:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:23:16.381018 (kubelet)[2959]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:23:16.534840 kubelet[2959]: E0128 00:23:16.534755 2959 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:23:16.934610 kernel: audit: type=1131 audit(1769559796.536:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 00:23:16.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 00:23:16.536788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:23:16.536912 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:23:16.537400 systemd[1]: kubelet.service: Consumed 105ms CPU time, 106.9M memory peak. Jan 28 00:23:17.427760 containerd[2198]: time="2026-01-28T00:23:17.427426170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:17.430331 containerd[2198]: time="2026-01-28T00:23:17.430298438Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16183484" Jan 28 00:23:17.433504 containerd[2198]: time="2026-01-28T00:23:17.433465474Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:17.438852 containerd[2198]: time="2026-01-28T00:23:17.438254777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:17.438900 containerd[2198]: time="2026-01-28T00:23:17.438864233Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.476804406s" Jan 28 00:23:17.438900 containerd[2198]: time="2026-01-28T00:23:17.438889370Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 28 00:23:17.439260 containerd[2198]: time="2026-01-28T00:23:17.439238187Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 00:23:17.973550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2700277936.mount: Deactivated successfully. 
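
[editor's note] The kubelet exit recorded above (run.go:72: /var/lib/kubelet/config.yaml missing, unit failed with exit-code, restart counter climbing) is typical of a node that has been imaged but not yet joined to a cluster; kubeadm init/join is what normally writes that config file, so systemd keeps restarting the unit until it appears. A trivial readiness check, as a sketch only:

    from pathlib import Path

    # The file the kubelet failed to load above; on a fresh node it is
    # normally created later by kubeadm init/join.
    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    def kubelet_config_ready() -> bool:
        """Return True once the kubelet config that run.go failed to load exists."""
        return KUBELET_CONFIG.is_file()

    if __name__ == "__main__":
        print("kubelet config present:", kubelet_config_ready())
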
Jan 28 00:23:17.993970 containerd[2198]: time="2026-01-28T00:23:17.993930244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:23:17.999571 containerd[2198]: time="2026-01-28T00:23:17.999419278Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 28 00:23:18.003464 containerd[2198]: time="2026-01-28T00:23:18.003441041Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:23:18.007925 containerd[2198]: time="2026-01-28T00:23:18.007421955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:23:18.007925 containerd[2198]: time="2026-01-28T00:23:18.007734419Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 568.474423ms" Jan 28 00:23:18.007925 containerd[2198]: time="2026-01-28T00:23:18.007757916Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 28 00:23:18.008303 containerd[2198]: time="2026-01-28T00:23:18.008281985Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 28 00:23:18.652573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount418219217.mount: Deactivated successfully. 
Jan 28 00:23:21.898081 containerd[2198]: time="2026-01-28T00:23:21.898031496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:21.902672 containerd[2198]: time="2026-01-28T00:23:21.902483767Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=66060366" Jan 28 00:23:21.906309 containerd[2198]: time="2026-01-28T00:23:21.906284396Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:21.911787 containerd[2198]: time="2026-01-28T00:23:21.911759061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:21.912423 containerd[2198]: time="2026-01-28T00:23:21.912398702Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.904093852s" Jan 28 00:23:21.912445 containerd[2198]: time="2026-01-28T00:23:21.912425791Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 28 00:23:24.713542 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:23:24.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:23:24.714000 systemd[1]: kubelet.service: Consumed 105ms CPU time, 106.9M memory peak. Jan 28 00:23:24.718007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:23:24.726840 kernel: audit: type=1130 audit(1769559804.713:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:23:24.726920 kernel: audit: type=1131 audit(1769559804.713:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:23:24.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:23:24.757375 systemd[1]: Reload requested from client PID 3089 ('systemctl') (unit session-10.scope)... Jan 28 00:23:24.757387 systemd[1]: Reloading... Jan 28 00:23:24.849833 zram_generator::config[3139]: No configuration found. Jan 28 00:23:25.004258 systemd[1]: Reloading finished in 246 ms. 
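
[editor's note] Each pull above pairs a "stop pulling ... bytes read=N" record with a "Pulled image ... in D" duration, so a rough registry throughput can be read straight off the log. A quick back-of-the-envelope check using the etcd figures above, treating "bytes read" as approximately the compressed download:

    # Figures copied from the etcd 3.5.16-0 pull records above.
    bytes_read = 66_060_366        # "active requests=0, bytes read=66060366"
    duration_s = 3.904093852       # "... in 3.904093852s"
    print(f"~{bytes_read / duration_s / 1e6:.1f} MB/s")   # ~16.9 MB/s
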
Jan 28 00:23:25.031000 audit: BPF prog-id=86 op=LOAD Jan 28 00:23:25.031000 audit: BPF prog-id=72 op=UNLOAD Jan 28 00:23:25.041613 kernel: audit: type=1334 audit(1769559805.031:302): prog-id=86 op=LOAD Jan 28 00:23:25.041657 kernel: audit: type=1334 audit(1769559805.031:303): prog-id=72 op=UNLOAD Jan 28 00:23:25.042000 audit: BPF prog-id=87 op=LOAD Jan 28 00:23:25.042000 audit: BPF prog-id=73 op=UNLOAD Jan 28 00:23:25.051507 kernel: audit: type=1334 audit(1769559805.042:304): prog-id=87 op=LOAD Jan 28 00:23:25.051535 kernel: audit: type=1334 audit(1769559805.042:305): prog-id=73 op=UNLOAD Jan 28 00:23:25.046000 audit: BPF prog-id=88 op=LOAD Jan 28 00:23:25.056735 kernel: audit: type=1334 audit(1769559805.046:306): prog-id=88 op=LOAD Jan 28 00:23:25.046000 audit: BPF prog-id=83 op=UNLOAD Jan 28 00:23:25.061156 kernel: audit: type=1334 audit(1769559805.046:307): prog-id=83 op=UNLOAD Jan 28 00:23:25.046000 audit: BPF prog-id=89 op=LOAD Jan 28 00:23:25.065239 kernel: audit: type=1334 audit(1769559805.046:308): prog-id=89 op=LOAD Jan 28 00:23:25.050000 audit: BPF prog-id=90 op=LOAD Jan 28 00:23:25.070399 kernel: audit: type=1334 audit(1769559805.050:309): prog-id=90 op=LOAD Jan 28 00:23:25.050000 audit: BPF prog-id=84 op=UNLOAD Jan 28 00:23:25.050000 audit: BPF prog-id=85 op=UNLOAD Jan 28 00:23:25.056000 audit: BPF prog-id=91 op=LOAD Jan 28 00:23:25.056000 audit: BPF prog-id=69 op=UNLOAD Jan 28 00:23:25.060000 audit: BPF prog-id=92 op=LOAD Jan 28 00:23:25.064000 audit: BPF prog-id=93 op=LOAD Jan 28 00:23:25.064000 audit: BPF prog-id=70 op=UNLOAD Jan 28 00:23:25.064000 audit: BPF prog-id=71 op=UNLOAD Jan 28 00:23:25.069000 audit: BPF prog-id=94 op=LOAD Jan 28 00:23:25.070000 audit: BPF prog-id=77 op=UNLOAD Jan 28 00:23:25.070000 audit: BPF prog-id=95 op=LOAD Jan 28 00:23:25.070000 audit: BPF prog-id=96 op=LOAD Jan 28 00:23:25.070000 audit: BPF prog-id=78 op=UNLOAD Jan 28 00:23:25.070000 audit: BPF prog-id=79 op=UNLOAD Jan 28 00:23:25.070000 audit: BPF prog-id=97 op=LOAD Jan 28 00:23:25.070000 audit: BPF prog-id=74 op=UNLOAD Jan 28 00:23:25.071000 audit: BPF prog-id=98 op=LOAD Jan 28 00:23:25.072000 audit: BPF prog-id=99 op=LOAD Jan 28 00:23:25.072000 audit: BPF prog-id=75 op=UNLOAD Jan 28 00:23:25.072000 audit: BPF prog-id=76 op=UNLOAD Jan 28 00:23:25.072000 audit: BPF prog-id=100 op=LOAD Jan 28 00:23:25.072000 audit: BPF prog-id=66 op=UNLOAD Jan 28 00:23:25.072000 audit: BPF prog-id=101 op=LOAD Jan 28 00:23:25.073000 audit: BPF prog-id=102 op=LOAD Jan 28 00:23:25.073000 audit: BPF prog-id=67 op=UNLOAD Jan 28 00:23:25.073000 audit: BPF prog-id=68 op=UNLOAD Jan 28 00:23:25.073000 audit: BPF prog-id=103 op=LOAD Jan 28 00:23:25.073000 audit: BPF prog-id=80 op=UNLOAD Jan 28 00:23:25.073000 audit: BPF prog-id=104 op=LOAD Jan 28 00:23:25.073000 audit: BPF prog-id=105 op=LOAD Jan 28 00:23:25.073000 audit: BPF prog-id=81 op=UNLOAD Jan 28 00:23:25.073000 audit: BPF prog-id=82 op=UNLOAD Jan 28 00:23:25.086196 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 00:23:25.086261 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 28 00:23:25.086583 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:23:25.086630 systemd[1]: kubelet.service: Consumed 85ms CPU time, 95.2M memory peak. Jan 28 00:23:25.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 28 00:23:25.088391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:23:25.252526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:23:25.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:23:25.258994 (kubelet)[3206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 00:23:25.378377 kubelet[3206]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:23:25.378948 kubelet[3206]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 00:23:25.378948 kubelet[3206]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:23:25.378948 kubelet[3206]: I0128 00:23:25.378712 3206 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 00:23:25.792839 kubelet[3206]: I0128 00:23:25.792664 3206 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 00:23:25.792839 kubelet[3206]: I0128 00:23:25.792695 3206 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 00:23:25.793190 kubelet[3206]: I0128 00:23:25.793173 3206 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 00:23:25.813881 kubelet[3206]: E0128 00:23:25.813854 3206 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:23:25.817408 kubelet[3206]: I0128 00:23:25.817270 3206 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 00:23:25.821702 kubelet[3206]: I0128 00:23:25.821679 3206 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 00:23:25.824047 kubelet[3206]: I0128 00:23:25.824027 3206 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 00:23:25.824727 kubelet[3206]: I0128 00:23:25.824694 3206 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 00:23:25.824864 kubelet[3206]: I0128 00:23:25.824725 3206 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.1.0-n-77eb5aaac5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 00:23:25.824950 kubelet[3206]: I0128 00:23:25.824871 3206 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 00:23:25.824950 kubelet[3206]: I0128 00:23:25.824879 3206 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 00:23:25.824984 kubelet[3206]: I0128 00:23:25.824972 3206 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:23:25.827444 kubelet[3206]: I0128 00:23:25.827424 3206 kubelet.go:446] "Attempting to sync node with API server" Jan 28 00:23:25.827444 kubelet[3206]: I0128 00:23:25.827443 3206 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 00:23:25.827501 kubelet[3206]: I0128 00:23:25.827460 3206 kubelet.go:352] "Adding apiserver pod source" Jan 28 00:23:25.827545 kubelet[3206]: I0128 00:23:25.827531 3206 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 00:23:25.832524 kubelet[3206]: W0128 00:23:25.832160 3206 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.1.0-n-77eb5aaac5&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused Jan 28 00:23:25.832524 kubelet[3206]: E0128 00:23:25.832209 3206 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.1.0-n-77eb5aaac5&limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:23:25.832524 
kubelet[3206]: W0128 00:23:25.832463 3206 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused Jan 28 00:23:25.832524 kubelet[3206]: E0128 00:23:25.832489 3206 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:23:25.833055 kubelet[3206]: I0128 00:23:25.833041 3206 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 28 00:23:25.834273 kubelet[3206]: I0128 00:23:25.834246 3206 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 00:23:25.834337 kubelet[3206]: W0128 00:23:25.834300 3206 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 28 00:23:25.834993 kubelet[3206]: I0128 00:23:25.834975 3206 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 00:23:25.835052 kubelet[3206]: I0128 00:23:25.835002 3206 server.go:1287] "Started kubelet" Jan 28 00:23:25.836838 kubelet[3206]: I0128 00:23:25.836089 3206 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 00:23:25.837158 kubelet[3206]: I0128 00:23:25.837144 3206 server.go:479] "Adding debug handlers to kubelet server" Jan 28 00:23:25.838157 kubelet[3206]: I0128 00:23:25.838105 3206 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 00:23:25.838356 kubelet[3206]: I0128 00:23:25.838338 3206 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 00:23:25.839485 kubelet[3206]: I0128 00:23:25.839464 3206 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 00:23:25.840729 kubelet[3206]: I0128 00:23:25.840711 3206 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 00:23:25.841845 kubelet[3206]: I0128 00:23:25.841774 3206 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 00:23:25.842237 kubelet[3206]: E0128 00:23:25.842206 3206 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.1.0-n-77eb5aaac5\" not found" Jan 28 00:23:25.842560 kubelet[3206]: I0128 00:23:25.842539 3206 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 00:23:25.842610 kubelet[3206]: I0128 00:23:25.842587 3206 reconciler.go:26] "Reconciler: start to sync state" Jan 28 00:23:25.843439 kubelet[3206]: W0128 00:23:25.843402 3206 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused Jan 28 00:23:25.843439 kubelet[3206]: E0128 00:23:25.843439 3206 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:23:25.843519 kubelet[3206]: E0128 00:23:25.843481 3206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.1.0-n-77eb5aaac5?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="200ms" Jan 28 00:23:25.843636 kubelet[3206]: I0128 00:23:25.843616 3206 factory.go:221] Registration of the systemd container factory successfully Jan 28 00:23:25.843698 kubelet[3206]: I0128 00:23:25.843684 3206 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 00:23:25.843000 audit[3218]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3218 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:25.843000 audit[3218]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffdde52f0 a2=0 a3=0 items=0 ppid=3206 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:25.843000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 00:23:25.844693 kubelet[3206]: E0128 00:23:25.844599 3206 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.33:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.33:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547.1.0-n-77eb5aaac5.188ebd44939a8df9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.1.0-n-77eb5aaac5,UID:ci-4547.1.0-n-77eb5aaac5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.1.0-n-77eb5aaac5,},FirstTimestamp:2026-01-28 00:23:25.834989049 +0000 UTC m=+0.572882445,LastTimestamp:2026-01-28 00:23:25.834989049 +0000 UTC m=+0.572882445,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.1.0-n-77eb5aaac5,}" Jan 28 00:23:25.844000 audit[3219]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:25.844000 audit[3219]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef661180 a2=0 a3=0 items=0 ppid=3206 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:25.844000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 00:23:25.845833 kubelet[3206]: I0128 00:23:25.845798 3206 factory.go:221] Registration of the containerd container factory successfully Jan 28 00:23:25.846000 audit[3221]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3221 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:25.846000 audit[3221]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff20cff90 a2=0 a3=0 items=0 ppid=3206 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:25.846000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 00:23:25.848000 audit[3223]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3223 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:25.848000 audit[3223]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffca6abea0 a2=0 a3=0 items=0 ppid=3206 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:25.848000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 00:23:25.858000 audit[3226]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:25.858000 audit[3226]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffcca6ccd0 a2=0 a3=0 items=0 ppid=3206 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:25.858000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 28 00:23:25.859335 kubelet[3206]: I0128 00:23:25.859224 3206 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 00:23:25.859000 audit[3228]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3228 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:25.859000 audit[3228]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc9fb9690 a2=0 a3=0 items=0 ppid=3206 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:25.859000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 00:23:25.860031 kubelet[3206]: I0128 00:23:25.860019 3206 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 00:23:25.860087 kubelet[3206]: I0128 00:23:25.860080 3206 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 00:23:25.860142 kubelet[3206]: I0128 00:23:25.860135 3206 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
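
[editor's note] The container_manager_linux.go record above dumps the kubelet's effective NodeConfig as one embedded JSON object (cgroup driver, hard eviction thresholds, CPU/topology manager policy, cgroup version). A small sketch for pulling that object out of a raw journal line for inspection; the variable `line` is assumed to hold the full log message text:

    import json

    def extract_node_config(line: str) -> dict:
        """Extract the JSON object following 'nodeConfig=' from a kubelet log line."""
        start = line.index("nodeConfig=") + len("nodeConfig=")
        depth = 0
        for i, ch in enumerate(line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return json.loads(line[start:i + 1])
        raise ValueError("unbalanced braces in nodeConfig")

    # e.g. extract_node_config(line)["HardEvictionThresholds"] lists the
    # memory.available < 100Mi and nodefs/imagefs percentage thresholds seen above.
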
Jan 28 00:23:25.860358 kubelet[3206]: I0128 00:23:25.860172 3206 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 00:23:25.860358 kubelet[3206]: E0128 00:23:25.860206 3206 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 00:23:25.860000 audit[3229]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3229 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:25.860000 audit[3229]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff204b890 a2=0 a3=0 items=0 ppid=3206 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:25.860000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 00:23:25.860000 audit[3230]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=3230 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:25.860000 audit[3230]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff36681f0 a2=0 a3=0 items=0 ppid=3206 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:25.860000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 00:23:25.861000 audit[3231]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:25.861000 audit[3231]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd4977e80 a2=0 a3=0 items=0 ppid=3206 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:25.861000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 00:23:25.862000 audit[3232]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3232 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:25.862000 audit[3232]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc9cea870 a2=0 a3=0 items=0 ppid=3206 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:25.862000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 00:23:25.863000 audit[3233]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3233 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:25.863000 audit[3233]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcf523600 a2=0 a3=0 items=0 ppid=3206 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
00:23:25.863000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 00:23:25.863000 audit[3234]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3234 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:25.863000 audit[3234]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe10cc060 a2=0 a3=0 items=0 ppid=3206 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:25.863000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 00:23:25.865127 kubelet[3206]: W0128 00:23:25.865103 3206 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused Jan 28 00:23:25.865795 kubelet[3206]: E0128 00:23:25.865364 3206 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:23:25.868344 kubelet[3206]: E0128 00:23:25.868323 3206 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 00:23:25.872229 kubelet[3206]: I0128 00:23:25.872210 3206 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 00:23:25.872229 kubelet[3206]: I0128 00:23:25.872224 3206 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 00:23:25.872305 kubelet[3206]: I0128 00:23:25.872239 3206 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:23:25.879238 kubelet[3206]: I0128 00:23:25.879220 3206 policy_none.go:49] "None policy: Start" Jan 28 00:23:25.879238 kubelet[3206]: I0128 00:23:25.879238 3206 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 00:23:25.879309 kubelet[3206]: I0128 00:23:25.879247 3206 state_mem.go:35] "Initializing new in-memory state store" Jan 28 00:23:25.889795 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 00:23:25.901010 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 00:23:25.903859 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 28 00:23:25.914378 kubelet[3206]: I0128 00:23:25.914359 3206 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 00:23:25.914500 kubelet[3206]: I0128 00:23:25.914485 3206 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 00:23:25.914529 kubelet[3206]: I0128 00:23:25.914497 3206 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 00:23:25.915281 kubelet[3206]: I0128 00:23:25.914726 3206 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 00:23:25.916351 kubelet[3206]: E0128 00:23:25.916336 3206 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 28 00:23:25.916449 kubelet[3206]: E0128 00:23:25.916432 3206 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547.1.0-n-77eb5aaac5\" not found" Jan 28 00:23:25.967900 systemd[1]: Created slice kubepods-burstable-pod4975761530c20a439e519570c6a1988f.slice - libcontainer container kubepods-burstable-pod4975761530c20a439e519570c6a1988f.slice. Jan 28 00:23:25.977621 kubelet[3206]: E0128 00:23:25.977591 3206 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-n-77eb5aaac5\" not found" node="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:25.979907 systemd[1]: Created slice kubepods-burstable-podefa124f5c527d00585677f9fe8ab1050.slice - libcontainer container kubepods-burstable-podefa124f5c527d00585677f9fe8ab1050.slice. Jan 28 00:23:25.981898 kubelet[3206]: E0128 00:23:25.981878 3206 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-n-77eb5aaac5\" not found" node="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:25.983961 systemd[1]: Created slice kubepods-burstable-podce46e34d7539e5d95e38a18ed0dc09ac.slice - libcontainer container kubepods-burstable-podce46e34d7539e5d95e38a18ed0dc09ac.slice. 
Jan 28 00:23:25.985413 kubelet[3206]: E0128 00:23:25.985396 3206 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-n-77eb5aaac5\" not found" node="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:26.015373 kubelet[3206]: I0128 00:23:26.015349 3206 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:26.015887 kubelet[3206]: E0128 00:23:26.015860 3206 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:26.044287 kubelet[3206]: I0128 00:23:26.044067 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/efa124f5c527d00585677f9fe8ab1050-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.1.0-n-77eb5aaac5\" (UID: \"efa124f5c527d00585677f9fe8ab1050\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:26.044287 kubelet[3206]: I0128 00:23:26.044094 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4975761530c20a439e519570c6a1988f-ca-certs\") pod \"kube-apiserver-ci-4547.1.0-n-77eb5aaac5\" (UID: \"4975761530c20a439e519570c6a1988f\") " pod="kube-system/kube-apiserver-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:26.044287 kubelet[3206]: I0128 00:23:26.044105 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4975761530c20a439e519570c6a1988f-k8s-certs\") pod \"kube-apiserver-ci-4547.1.0-n-77eb5aaac5\" (UID: \"4975761530c20a439e519570c6a1988f\") " pod="kube-system/kube-apiserver-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:26.044287 kubelet[3206]: I0128 00:23:26.044115 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/efa124f5c527d00585677f9fe8ab1050-ca-certs\") pod \"kube-controller-manager-ci-4547.1.0-n-77eb5aaac5\" (UID: \"efa124f5c527d00585677f9fe8ab1050\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:26.044287 kubelet[3206]: I0128 00:23:26.044131 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/efa124f5c527d00585677f9fe8ab1050-k8s-certs\") pod \"kube-controller-manager-ci-4547.1.0-n-77eb5aaac5\" (UID: \"efa124f5c527d00585677f9fe8ab1050\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:26.044409 kubelet[3206]: I0128 00:23:26.044140 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/efa124f5c527d00585677f9fe8ab1050-kubeconfig\") pod \"kube-controller-manager-ci-4547.1.0-n-77eb5aaac5\" (UID: \"efa124f5c527d00585677f9fe8ab1050\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:26.044409 kubelet[3206]: I0128 00:23:26.044148 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4975761530c20a439e519570c6a1988f-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4547.1.0-n-77eb5aaac5\" (UID: \"4975761530c20a439e519570c6a1988f\") " pod="kube-system/kube-apiserver-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:26.044409 kubelet[3206]: I0128 00:23:26.044158 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/efa124f5c527d00585677f9fe8ab1050-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.1.0-n-77eb5aaac5\" (UID: \"efa124f5c527d00585677f9fe8ab1050\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:26.044409 kubelet[3206]: I0128 00:23:26.044167 3206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce46e34d7539e5d95e38a18ed0dc09ac-kubeconfig\") pod \"kube-scheduler-ci-4547.1.0-n-77eb5aaac5\" (UID: \"ce46e34d7539e5d95e38a18ed0dc09ac\") " pod="kube-system/kube-scheduler-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:26.044409 kubelet[3206]: E0128 00:23:26.044210 3206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.1.0-n-77eb5aaac5?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="400ms" Jan 28 00:23:26.218117 kubelet[3206]: I0128 00:23:26.218086 3206 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:26.218402 kubelet[3206]: E0128 00:23:26.218376 3206 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:26.278525 containerd[2198]: time="2026-01-28T00:23:26.278488129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.1.0-n-77eb5aaac5,Uid:4975761530c20a439e519570c6a1988f,Namespace:kube-system,Attempt:0,}" Jan 28 00:23:26.282984 containerd[2198]: time="2026-01-28T00:23:26.282948067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.1.0-n-77eb5aaac5,Uid:efa124f5c527d00585677f9fe8ab1050,Namespace:kube-system,Attempt:0,}" Jan 28 00:23:26.286571 containerd[2198]: time="2026-01-28T00:23:26.286537344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.1.0-n-77eb5aaac5,Uid:ce46e34d7539e5d95e38a18ed0dc09ac,Namespace:kube-system,Attempt:0,}" Jan 28 00:23:26.373518 containerd[2198]: time="2026-01-28T00:23:26.373403921Z" level=info msg="connecting to shim a926befa90cdca9ff97f9af7e8f757c9ca44c12bd519d715296a0da788bf5ef6" address="unix:///run/containerd/s/b83d158caed28db2f773cdaa7e2fd31d2f5729fbd626dce34fbe34947b02cf33" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:23:26.373613 containerd[2198]: time="2026-01-28T00:23:26.373402673Z" level=info msg="connecting to shim 06f20e7c2d0ef306ca5fe65ca4f56758d95bc27daaa1ecdccbe0a30ded1146ce" address="unix:///run/containerd/s/e629ff377f4df5aac98ad5b9e51de071c0cf7016f3b7dae6f5cba13b45da26bb" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:23:26.401497 containerd[2198]: time="2026-01-28T00:23:26.399271933Z" level=info msg="connecting to shim e8ee8dba302458e495611eac0c91ccf4305edb95402bf8d12c1fded3b29815c9" address="unix:///run/containerd/s/77ef27d1ac5d3a2f4f5cddfa5b7c00f292e77041629f5152d7ccf927e23993eb" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:23:26.404026 systemd[1]: Started 
cri-containerd-a926befa90cdca9ff97f9af7e8f757c9ca44c12bd519d715296a0da788bf5ef6.scope - libcontainer container a926befa90cdca9ff97f9af7e8f757c9ca44c12bd519d715296a0da788bf5ef6. Jan 28 00:23:26.407751 systemd[1]: Started cri-containerd-06f20e7c2d0ef306ca5fe65ca4f56758d95bc27daaa1ecdccbe0a30ded1146ce.scope - libcontainer container 06f20e7c2d0ef306ca5fe65ca4f56758d95bc27daaa1ecdccbe0a30ded1146ce. Jan 28 00:23:26.421000 audit: BPF prog-id=106 op=LOAD Jan 28 00:23:26.422000 audit: BPF prog-id=107 op=LOAD Jan 28 00:23:26.422000 audit: BPF prog-id=108 op=LOAD Jan 28 00:23:26.422000 audit[3275]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3263 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139323662656661393063646361396666393766396166376538663735 Jan 28 00:23:26.422000 audit: BPF prog-id=108 op=UNLOAD Jan 28 00:23:26.422000 audit[3275]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139323662656661393063646361396666393766396166376538663735 Jan 28 00:23:26.422000 audit: BPF prog-id=109 op=LOAD Jan 28 00:23:26.422000 audit[3277]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000174180 a2=98 a3=0 items=0 ppid=3245 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036663230653763326430656633303663613566653635636134663536 Jan 28 00:23:26.422000 audit: BPF prog-id=109 op=UNLOAD Jan 28 00:23:26.422000 audit[3277]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3245 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036663230653763326430656633303663613566653635636134663536 Jan 28 00:23:26.423000 audit: BPF prog-id=110 op=LOAD Jan 28 00:23:26.423000 audit[3275]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3263 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139323662656661393063646361396666393766396166376538663735 Jan 28 00:23:26.423000 audit: BPF prog-id=111 op=LOAD Jan 28 00:23:26.423000 audit[3275]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3263 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139323662656661393063646361396666393766396166376538663735 Jan 28 00:23:26.423000 audit: BPF prog-id=111 op=UNLOAD Jan 28 00:23:26.423000 audit[3275]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139323662656661393063646361396666393766396166376538663735 Jan 28 00:23:26.423000 audit: BPF prog-id=110 op=UNLOAD Jan 28 00:23:26.423000 audit[3275]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139323662656661393063646361396666393766396166376538663735 Jan 28 00:23:26.423000 audit: BPF prog-id=112 op=LOAD Jan 28 00:23:26.423000 audit[3275]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3263 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139323662656661393063646361396666393766396166376538663735 Jan 28 00:23:26.423000 audit: BPF prog-id=113 op=LOAD Jan 28 00:23:26.423000 audit[3277]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001743e8 a2=98 a3=0 items=0 ppid=3245 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.423000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036663230653763326430656633303663613566653635636134663536 Jan 28 00:23:26.423000 audit: BPF prog-id=114 op=LOAD Jan 28 00:23:26.423000 audit[3277]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000174168 a2=98 a3=0 items=0 ppid=3245 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036663230653763326430656633303663613566653635636134663536 Jan 28 00:23:26.423000 audit: BPF prog-id=114 op=UNLOAD Jan 28 00:23:26.423000 audit[3277]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3245 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036663230653763326430656633303663613566653635636134663536 Jan 28 00:23:26.423000 audit: BPF prog-id=113 op=UNLOAD Jan 28 00:23:26.423000 audit[3277]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3245 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036663230653763326430656633303663613566653635636134663536 Jan 28 00:23:26.423000 audit: BPF prog-id=115 op=LOAD Jan 28 00:23:26.423000 audit[3277]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000174648 a2=98 a3=0 items=0 ppid=3245 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036663230653763326430656633303663613566653635636134663536 Jan 28 00:23:26.432949 systemd[1]: Started cri-containerd-e8ee8dba302458e495611eac0c91ccf4305edb95402bf8d12c1fded3b29815c9.scope - libcontainer container e8ee8dba302458e495611eac0c91ccf4305edb95402bf8d12c1fded3b29815c9. 
Jan 28 00:23:26.445828 kubelet[3206]: E0128 00:23:26.445312 3206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.1.0-n-77eb5aaac5?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="800ms" Jan 28 00:23:26.453000 audit: BPF prog-id=116 op=LOAD Jan 28 00:23:26.454000 audit: BPF prog-id=117 op=LOAD Jan 28 00:23:26.454000 audit[3324]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3304 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538656538646261333032343538653439353631316561633063393163 Jan 28 00:23:26.454000 audit: BPF prog-id=117 op=UNLOAD Jan 28 00:23:26.454000 audit[3324]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538656538646261333032343538653439353631316561633063393163 Jan 28 00:23:26.455490 containerd[2198]: time="2026-01-28T00:23:26.455455901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.1.0-n-77eb5aaac5,Uid:efa124f5c527d00585677f9fe8ab1050,Namespace:kube-system,Attempt:0,} returns sandbox id \"a926befa90cdca9ff97f9af7e8f757c9ca44c12bd519d715296a0da788bf5ef6\"" Jan 28 00:23:26.454000 audit: BPF prog-id=118 op=LOAD Jan 28 00:23:26.454000 audit[3324]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3304 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538656538646261333032343538653439353631316561633063393163 Jan 28 00:23:26.455000 audit: BPF prog-id=119 op=LOAD Jan 28 00:23:26.455000 audit[3324]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3304 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538656538646261333032343538653439353631316561633063393163 Jan 28 00:23:26.455000 audit: BPF prog-id=119 op=UNLOAD Jan 28 00:23:26.455000 
audit[3324]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538656538646261333032343538653439353631316561633063393163 Jan 28 00:23:26.455000 audit: BPF prog-id=118 op=UNLOAD Jan 28 00:23:26.455000 audit[3324]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538656538646261333032343538653439353631316561633063393163 Jan 28 00:23:26.455000 audit: BPF prog-id=120 op=LOAD Jan 28 00:23:26.455000 audit[3324]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3304 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538656538646261333032343538653439353631316561633063393163 Jan 28 00:23:26.459297 containerd[2198]: time="2026-01-28T00:23:26.459268521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.1.0-n-77eb5aaac5,Uid:4975761530c20a439e519570c6a1988f,Namespace:kube-system,Attempt:0,} returns sandbox id \"06f20e7c2d0ef306ca5fe65ca4f56758d95bc27daaa1ecdccbe0a30ded1146ce\"" Jan 28 00:23:26.459639 containerd[2198]: time="2026-01-28T00:23:26.459612260Z" level=info msg="CreateContainer within sandbox \"a926befa90cdca9ff97f9af7e8f757c9ca44c12bd519d715296a0da788bf5ef6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 00:23:26.462848 containerd[2198]: time="2026-01-28T00:23:26.462350694Z" level=info msg="CreateContainer within sandbox \"06f20e7c2d0ef306ca5fe65ca4f56758d95bc27daaa1ecdccbe0a30ded1146ce\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 00:23:26.481846 containerd[2198]: time="2026-01-28T00:23:26.481620026Z" level=info msg="Container 017d6ea0259e8c5e2e8bf62ee725fb533d8361cf56f64fbbfa61e315e9218c21: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:23:26.488799 containerd[2198]: time="2026-01-28T00:23:26.488773644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.1.0-n-77eb5aaac5,Uid:ce46e34d7539e5d95e38a18ed0dc09ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8ee8dba302458e495611eac0c91ccf4305edb95402bf8d12c1fded3b29815c9\"" Jan 28 00:23:26.490930 containerd[2198]: time="2026-01-28T00:23:26.490909489Z" level=info msg="CreateContainer within sandbox 
\"e8ee8dba302458e495611eac0c91ccf4305edb95402bf8d12c1fded3b29815c9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 00:23:26.500260 containerd[2198]: time="2026-01-28T00:23:26.500237881Z" level=info msg="Container f2d5f40f7ff3c1ea11eea9a6f490b8761ab38dd6ce47cd3d9c79e2194fec0880: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:23:26.509365 containerd[2198]: time="2026-01-28T00:23:26.509336834Z" level=info msg="CreateContainer within sandbox \"a926befa90cdca9ff97f9af7e8f757c9ca44c12bd519d715296a0da788bf5ef6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"017d6ea0259e8c5e2e8bf62ee725fb533d8361cf56f64fbbfa61e315e9218c21\"" Jan 28 00:23:26.509869 containerd[2198]: time="2026-01-28T00:23:26.509773720Z" level=info msg="StartContainer for \"017d6ea0259e8c5e2e8bf62ee725fb533d8361cf56f64fbbfa61e315e9218c21\"" Jan 28 00:23:26.510834 containerd[2198]: time="2026-01-28T00:23:26.510665509Z" level=info msg="connecting to shim 017d6ea0259e8c5e2e8bf62ee725fb533d8361cf56f64fbbfa61e315e9218c21" address="unix:///run/containerd/s/b83d158caed28db2f773cdaa7e2fd31d2f5729fbd626dce34fbe34947b02cf33" protocol=ttrpc version=3 Jan 28 00:23:26.526974 systemd[1]: Started cri-containerd-017d6ea0259e8c5e2e8bf62ee725fb533d8361cf56f64fbbfa61e315e9218c21.scope - libcontainer container 017d6ea0259e8c5e2e8bf62ee725fb533d8361cf56f64fbbfa61e315e9218c21. Jan 28 00:23:26.530218 containerd[2198]: time="2026-01-28T00:23:26.530042805Z" level=info msg="CreateContainer within sandbox \"06f20e7c2d0ef306ca5fe65ca4f56758d95bc27daaa1ecdccbe0a30ded1146ce\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f2d5f40f7ff3c1ea11eea9a6f490b8761ab38dd6ce47cd3d9c79e2194fec0880\"" Jan 28 00:23:26.530887 containerd[2198]: time="2026-01-28T00:23:26.530860216Z" level=info msg="StartContainer for \"f2d5f40f7ff3c1ea11eea9a6f490b8761ab38dd6ce47cd3d9c79e2194fec0880\"" Jan 28 00:23:26.531547 containerd[2198]: time="2026-01-28T00:23:26.531522222Z" level=info msg="connecting to shim f2d5f40f7ff3c1ea11eea9a6f490b8761ab38dd6ce47cd3d9c79e2194fec0880" address="unix:///run/containerd/s/e629ff377f4df5aac98ad5b9e51de071c0cf7016f3b7dae6f5cba13b45da26bb" protocol=ttrpc version=3 Jan 28 00:23:26.532849 containerd[2198]: time="2026-01-28T00:23:26.532702444Z" level=info msg="Container 7cb1ff70087b5190ee0c0e5b5d11accc47f802b7a136dae4d0aa2a4b5a87e3f7: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:23:26.542000 audit: BPF prog-id=121 op=LOAD Jan 28 00:23:26.543000 audit: BPF prog-id=122 op=LOAD Jan 28 00:23:26.543000 audit[3377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=3263 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031376436656130323539653863356532653862663632656537323566 Jan 28 00:23:26.543000 audit: BPF prog-id=122 op=UNLOAD Jan 28 00:23:26.543000 audit[3377]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 00:23:26.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031376436656130323539653863356532653862663632656537323566 Jan 28 00:23:26.543000 audit: BPF prog-id=123 op=LOAD Jan 28 00:23:26.543000 audit[3377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=3263 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031376436656130323539653863356532653862663632656537323566 Jan 28 00:23:26.543000 audit: BPF prog-id=124 op=LOAD Jan 28 00:23:26.543000 audit[3377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=3263 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031376436656130323539653863356532653862663632656537323566 Jan 28 00:23:26.543000 audit: BPF prog-id=124 op=UNLOAD Jan 28 00:23:26.543000 audit[3377]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031376436656130323539653863356532653862663632656537323566 Jan 28 00:23:26.543000 audit: BPF prog-id=123 op=UNLOAD Jan 28 00:23:26.543000 audit[3377]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031376436656130323539653863356532653862663632656537323566 Jan 28 00:23:26.543000 audit: BPF prog-id=125 op=LOAD Jan 28 00:23:26.543000 audit[3377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=3263 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.543000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031376436656130323539653863356532653862663632656537323566 Jan 28 00:23:26.549992 systemd[1]: Started cri-containerd-f2d5f40f7ff3c1ea11eea9a6f490b8761ab38dd6ce47cd3d9c79e2194fec0880.scope - libcontainer container f2d5f40f7ff3c1ea11eea9a6f490b8761ab38dd6ce47cd3d9c79e2194fec0880. Jan 28 00:23:26.555065 containerd[2198]: time="2026-01-28T00:23:26.554657080Z" level=info msg="CreateContainer within sandbox \"e8ee8dba302458e495611eac0c91ccf4305edb95402bf8d12c1fded3b29815c9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7cb1ff70087b5190ee0c0e5b5d11accc47f802b7a136dae4d0aa2a4b5a87e3f7\"" Jan 28 00:23:26.555522 containerd[2198]: time="2026-01-28T00:23:26.555497532Z" level=info msg="StartContainer for \"7cb1ff70087b5190ee0c0e5b5d11accc47f802b7a136dae4d0aa2a4b5a87e3f7\"" Jan 28 00:23:26.556541 containerd[2198]: time="2026-01-28T00:23:26.556435178Z" level=info msg="connecting to shim 7cb1ff70087b5190ee0c0e5b5d11accc47f802b7a136dae4d0aa2a4b5a87e3f7" address="unix:///run/containerd/s/77ef27d1ac5d3a2f4f5cddfa5b7c00f292e77041629f5152d7ccf927e23993eb" protocol=ttrpc version=3 Jan 28 00:23:26.564000 audit: BPF prog-id=126 op=LOAD Jan 28 00:23:26.564000 audit: BPF prog-id=127 op=LOAD Jan 28 00:23:26.564000 audit[3397]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3245 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632643566343066376666336331656131316565613961366634393062 Jan 28 00:23:26.564000 audit: BPF prog-id=127 op=UNLOAD Jan 28 00:23:26.564000 audit[3397]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3245 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632643566343066376666336331656131316565613961366634393062 Jan 28 00:23:26.564000 audit: BPF prog-id=128 op=LOAD Jan 28 00:23:26.564000 audit[3397]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3245 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632643566343066376666336331656131316565613961366634393062 Jan 28 00:23:26.564000 audit: BPF prog-id=129 op=LOAD Jan 28 00:23:26.564000 audit[3397]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3245 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632643566343066376666336331656131316565613961366634393062 Jan 28 00:23:26.565000 audit: BPF prog-id=129 op=UNLOAD Jan 28 00:23:26.565000 audit[3397]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3245 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632643566343066376666336331656131316565613961366634393062 Jan 28 00:23:26.565000 audit: BPF prog-id=128 op=UNLOAD Jan 28 00:23:26.565000 audit[3397]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3245 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632643566343066376666336331656131316565613961366634393062 Jan 28 00:23:26.565000 audit: BPF prog-id=130 op=LOAD Jan 28 00:23:26.565000 audit[3397]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3245 pid=3397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632643566343066376666336331656131316565613961366634393062 Jan 28 00:23:26.582078 containerd[2198]: time="2026-01-28T00:23:26.582044757Z" level=info msg="StartContainer for \"017d6ea0259e8c5e2e8bf62ee725fb533d8361cf56f64fbbfa61e315e9218c21\" returns successfully" Jan 28 00:23:26.584100 systemd[1]: Started cri-containerd-7cb1ff70087b5190ee0c0e5b5d11accc47f802b7a136dae4d0aa2a4b5a87e3f7.scope - libcontainer container 7cb1ff70087b5190ee0c0e5b5d11accc47f802b7a136dae4d0aa2a4b5a87e3f7. 
Jan 28 00:23:26.603636 containerd[2198]: time="2026-01-28T00:23:26.603612973Z" level=info msg="StartContainer for \"f2d5f40f7ff3c1ea11eea9a6f490b8761ab38dd6ce47cd3d9c79e2194fec0880\" returns successfully" Jan 28 00:23:26.605000 audit: BPF prog-id=131 op=LOAD Jan 28 00:23:26.605000 audit: BPF prog-id=132 op=LOAD Jan 28 00:23:26.605000 audit[3417]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3304 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763623166663730303837623531393065653063306535623564313161 Jan 28 00:23:26.605000 audit: BPF prog-id=132 op=UNLOAD Jan 28 00:23:26.605000 audit[3417]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763623166663730303837623531393065653063306535623564313161 Jan 28 00:23:26.606000 audit: BPF prog-id=133 op=LOAD Jan 28 00:23:26.606000 audit[3417]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3304 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.606000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763623166663730303837623531393065653063306535623564313161 Jan 28 00:23:26.606000 audit: BPF prog-id=134 op=LOAD Jan 28 00:23:26.606000 audit[3417]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3304 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.606000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763623166663730303837623531393065653063306535623564313161 Jan 28 00:23:26.606000 audit: BPF prog-id=134 op=UNLOAD Jan 28 00:23:26.606000 audit[3417]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.606000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763623166663730303837623531393065653063306535623564313161 Jan 28 00:23:26.606000 audit: BPF prog-id=133 op=UNLOAD Jan 28 00:23:26.606000 audit[3417]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3304 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.606000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763623166663730303837623531393065653063306535623564313161 Jan 28 00:23:26.606000 audit: BPF prog-id=135 op=LOAD Jan 28 00:23:26.606000 audit[3417]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3304 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:26.606000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763623166663730303837623531393065653063306535623564313161 Jan 28 00:23:26.621706 kubelet[3206]: I0128 00:23:26.621388 3206 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:26.621706 kubelet[3206]: E0128 00:23:26.621664 3206 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:26.641891 containerd[2198]: time="2026-01-28T00:23:26.641400309Z" level=info msg="StartContainer for \"7cb1ff70087b5190ee0c0e5b5d11accc47f802b7a136dae4d0aa2a4b5a87e3f7\" returns successfully" Jan 28 00:23:26.878490 kubelet[3206]: E0128 00:23:26.878156 3206 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-n-77eb5aaac5\" not found" node="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:26.881475 kubelet[3206]: E0128 00:23:26.881456 3206 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-n-77eb5aaac5\" not found" node="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:26.884108 kubelet[3206]: E0128 00:23:26.883879 3206 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.1.0-n-77eb5aaac5\" not found" node="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:27.423588 kubelet[3206]: I0128 00:23:27.423536 3206 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:27.579164 kubelet[3206]: E0128 00:23:27.579122 3206 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547.1.0-n-77eb5aaac5\" not found" node="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:27.657476 kubelet[3206]: I0128 00:23:27.657437 3206 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.1.0-n-77eb5aaac5" Jan 28 
00:23:27.742536 kubelet[3206]: I0128 00:23:27.742447 3206 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:27.756767 kubelet[3206]: E0128 00:23:27.756634 3206 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.1.0-n-77eb5aaac5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:27.756767 kubelet[3206]: I0128 00:23:27.756653 3206 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:27.759870 kubelet[3206]: E0128 00:23:27.759850 3206 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.1.0-n-77eb5aaac5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:27.760084 kubelet[3206]: I0128 00:23:27.760020 3206 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:27.761055 kubelet[3206]: E0128 00:23:27.761037 3206 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.1.0-n-77eb5aaac5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:27.833505 kubelet[3206]: I0128 00:23:27.833475 3206 apiserver.go:52] "Watching apiserver" Jan 28 00:23:27.842609 kubelet[3206]: I0128 00:23:27.842592 3206 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 00:23:27.884484 kubelet[3206]: I0128 00:23:27.884331 3206 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:27.884685 kubelet[3206]: I0128 00:23:27.884671 3206 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:27.885859 kubelet[3206]: E0128 00:23:27.885841 3206 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.1.0-n-77eb5aaac5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:27.886912 kubelet[3206]: E0128 00:23:27.886801 3206 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.1.0-n-77eb5aaac5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:28.886048 kubelet[3206]: I0128 00:23:28.885960 3206 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:28.894235 kubelet[3206]: W0128 00:23:28.894158 3206 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 00:23:28.899892 update_engine[2166]: I20260128 00:23:28.899844 2166 update_attempter.cc:509] Updating boot flags... Jan 28 00:23:29.750746 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 28 00:23:29.811333 systemd[1]: Reload requested from client PID 3591 ('systemctl') (unit session-10.scope)... Jan 28 00:23:29.811348 systemd[1]: Reloading... Jan 28 00:23:29.900843 zram_generator::config[3647]: No configuration found. 
Jan 28 00:23:30.082580 systemd[1]: Reloading finished in 270 ms. Jan 28 00:23:30.111995 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:23:30.127442 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 00:23:30.127688 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:23:30.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:23:30.130498 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 28 00:23:30.130539 kernel: audit: type=1131 audit(1769559810.126:404): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:23:30.130600 systemd[1]: kubelet.service: Consumed 735ms CPU time, 125.7M memory peak. Jan 28 00:23:30.134006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:23:30.143000 audit: BPF prog-id=136 op=LOAD Jan 28 00:23:30.149841 kernel: audit: type=1334 audit(1769559810.143:405): prog-id=136 op=LOAD Jan 28 00:23:30.150000 audit: BPF prog-id=137 op=LOAD Jan 28 00:23:30.150000 audit: BPF prog-id=98 op=UNLOAD Jan 28 00:23:30.162397 kernel: audit: type=1334 audit(1769559810.150:406): prog-id=137 op=LOAD Jan 28 00:23:30.162443 kernel: audit: type=1334 audit(1769559810.150:407): prog-id=98 op=UNLOAD Jan 28 00:23:30.150000 audit: BPF prog-id=99 op=UNLOAD Jan 28 00:23:30.167072 kernel: audit: type=1334 audit(1769559810.150:408): prog-id=99 op=UNLOAD Jan 28 00:23:30.155000 audit: BPF prog-id=138 op=LOAD Jan 28 00:23:30.171697 kernel: audit: type=1334 audit(1769559810.155:409): prog-id=138 op=LOAD Jan 28 00:23:30.155000 audit: BPF prog-id=103 op=UNLOAD Jan 28 00:23:30.176784 kernel: audit: type=1334 audit(1769559810.155:410): prog-id=103 op=UNLOAD Jan 28 00:23:30.155000 audit: BPF prog-id=139 op=LOAD Jan 28 00:23:30.181246 kernel: audit: type=1334 audit(1769559810.155:411): prog-id=139 op=LOAD Jan 28 00:23:30.155000 audit: BPF prog-id=140 op=LOAD Jan 28 00:23:30.185714 kernel: audit: type=1334 audit(1769559810.155:412): prog-id=140 op=LOAD Jan 28 00:23:30.155000 audit: BPF prog-id=104 op=UNLOAD Jan 28 00:23:30.190187 kernel: audit: type=1334 audit(1769559810.155:413): prog-id=104 op=UNLOAD Jan 28 00:23:30.155000 audit: BPF prog-id=105 op=UNLOAD Jan 28 00:23:30.161000 audit: BPF prog-id=141 op=LOAD Jan 28 00:23:30.161000 audit: BPF prog-id=94 op=UNLOAD Jan 28 00:23:30.161000 audit: BPF prog-id=142 op=LOAD Jan 28 00:23:30.161000 audit: BPF prog-id=143 op=LOAD Jan 28 00:23:30.161000 audit: BPF prog-id=95 op=UNLOAD Jan 28 00:23:30.161000 audit: BPF prog-id=96 op=UNLOAD Jan 28 00:23:30.166000 audit: BPF prog-id=144 op=LOAD Jan 28 00:23:30.166000 audit: BPF prog-id=87 op=UNLOAD Jan 28 00:23:30.170000 audit: BPF prog-id=145 op=LOAD Jan 28 00:23:30.170000 audit: BPF prog-id=91 op=UNLOAD Jan 28 00:23:30.170000 audit: BPF prog-id=146 op=LOAD Jan 28 00:23:30.175000 audit: BPF prog-id=147 op=LOAD Jan 28 00:23:30.175000 audit: BPF prog-id=92 op=UNLOAD Jan 28 00:23:30.175000 audit: BPF prog-id=93 op=UNLOAD Jan 28 00:23:30.180000 audit: BPF prog-id=148 op=LOAD Jan 28 00:23:30.180000 audit: BPF prog-id=100 op=UNLOAD Jan 28 00:23:30.180000 audit: BPF prog-id=149 op=LOAD Jan 28 00:23:30.184000 audit: BPF prog-id=150 op=LOAD Jan 28 00:23:30.184000 audit: BPF prog-id=101 op=UNLOAD Jan 28 
00:23:30.184000 audit: BPF prog-id=102 op=UNLOAD Jan 28 00:23:30.189000 audit: BPF prog-id=151 op=LOAD Jan 28 00:23:30.189000 audit: BPF prog-id=97 op=UNLOAD Jan 28 00:23:30.190000 audit: BPF prog-id=152 op=LOAD Jan 28 00:23:30.190000 audit: BPF prog-id=86 op=UNLOAD Jan 28 00:23:30.190000 audit: BPF prog-id=153 op=LOAD Jan 28 00:23:30.190000 audit: BPF prog-id=88 op=UNLOAD Jan 28 00:23:30.190000 audit: BPF prog-id=154 op=LOAD Jan 28 00:23:30.190000 audit: BPF prog-id=155 op=LOAD Jan 28 00:23:30.190000 audit: BPF prog-id=89 op=UNLOAD Jan 28 00:23:30.190000 audit: BPF prog-id=90 op=UNLOAD Jan 28 00:23:30.288947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:23:30.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:23:30.303027 (kubelet)[3705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 00:23:30.329836 kubelet[3705]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:23:30.329836 kubelet[3705]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 00:23:30.329836 kubelet[3705]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:23:30.329836 kubelet[3705]: I0128 00:23:30.329780 3705 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 00:23:30.334889 kubelet[3705]: I0128 00:23:30.333840 3705 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 00:23:30.334889 kubelet[3705]: I0128 00:23:30.333861 3705 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 00:23:30.334889 kubelet[3705]: I0128 00:23:30.334034 3705 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 00:23:30.335483 kubelet[3705]: I0128 00:23:30.335464 3705 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 28 00:23:30.337679 kubelet[3705]: I0128 00:23:30.337280 3705 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 00:23:30.340618 kubelet[3705]: I0128 00:23:30.340600 3705 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 00:23:30.343840 kubelet[3705]: I0128 00:23:30.343611 3705 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 00:23:30.343840 kubelet[3705]: I0128 00:23:30.343768 3705 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 00:23:30.343933 kubelet[3705]: I0128 00:23:30.343783 3705 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.1.0-n-77eb5aaac5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 00:23:30.343933 kubelet[3705]: I0128 00:23:30.343918 3705 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 00:23:30.343933 kubelet[3705]: I0128 00:23:30.343924 3705 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 00:23:30.344020 kubelet[3705]: I0128 00:23:30.343956 3705 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:23:30.344060 kubelet[3705]: I0128 00:23:30.344044 3705 kubelet.go:446] "Attempting to sync node with API server" Jan 28 00:23:30.344121 kubelet[3705]: I0128 00:23:30.344111 3705 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 00:23:30.344147 kubelet[3705]: I0128 00:23:30.344131 3705 kubelet.go:352] "Adding apiserver pod source" Jan 28 00:23:30.344147 kubelet[3705]: I0128 00:23:30.344139 3705 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 00:23:30.345761 kubelet[3705]: I0128 00:23:30.345745 3705 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 28 00:23:30.346206 kubelet[3705]: I0128 00:23:30.346193 3705 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 00:23:30.346673 kubelet[3705]: I0128 00:23:30.346660 3705 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 00:23:30.346788 kubelet[3705]: I0128 00:23:30.346775 3705 server.go:1287] "Started kubelet" Jan 28 00:23:30.348403 kubelet[3705]: I0128 00:23:30.348389 3705 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 00:23:30.352862 kubelet[3705]: I0128 00:23:30.352806 3705 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 28 00:23:30.353489 kubelet[3705]: I0128 00:23:30.353474 3705 server.go:479] "Adding debug handlers to kubelet server" Jan 28 00:23:30.354570 kubelet[3705]: I0128 00:23:30.354536 3705 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 00:23:30.354811 kubelet[3705]: I0128 00:23:30.354798 3705 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 00:23:30.355138 kubelet[3705]: I0128 00:23:30.355114 3705 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 00:23:30.356136 kubelet[3705]: I0128 00:23:30.356123 3705 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 00:23:30.356373 kubelet[3705]: E0128 00:23:30.356353 3705 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.1.0-n-77eb5aaac5\" not found" Jan 28 00:23:30.357943 kubelet[3705]: I0128 00:23:30.357928 3705 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 00:23:30.358120 kubelet[3705]: I0128 00:23:30.358110 3705 reconciler.go:26] "Reconciler: start to sync state" Jan 28 00:23:30.359331 kubelet[3705]: I0128 00:23:30.359304 3705 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 00:23:30.360133 kubelet[3705]: I0128 00:23:30.360117 3705 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 00:23:30.360214 kubelet[3705]: I0128 00:23:30.360207 3705 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 00:23:30.360262 kubelet[3705]: I0128 00:23:30.360255 3705 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 28 00:23:30.360296 kubelet[3705]: I0128 00:23:30.360290 3705 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 00:23:30.360369 kubelet[3705]: E0128 00:23:30.360357 3705 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 00:23:30.367807 kubelet[3705]: I0128 00:23:30.367787 3705 factory.go:221] Registration of the systemd container factory successfully Jan 28 00:23:30.368028 kubelet[3705]: I0128 00:23:30.368011 3705 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 00:23:30.369297 kubelet[3705]: I0128 00:23:30.369277 3705 factory.go:221] Registration of the containerd container factory successfully Jan 28 00:23:30.411368 kubelet[3705]: I0128 00:23:30.411341 3705 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 00:23:30.411368 kubelet[3705]: I0128 00:23:30.411357 3705 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 00:23:30.411368 kubelet[3705]: I0128 00:23:30.411374 3705 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:23:30.411920 kubelet[3705]: I0128 00:23:30.411781 3705 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 00:23:30.411920 kubelet[3705]: I0128 00:23:30.411793 3705 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 00:23:30.411920 kubelet[3705]: I0128 00:23:30.411811 3705 policy_none.go:49] "None policy: Start" Jan 28 00:23:30.411920 kubelet[3705]: I0128 00:23:30.411909 3705 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 00:23:30.411920 kubelet[3705]: I0128 00:23:30.411927 3705 state_mem.go:35] "Initializing new in-memory state store" Jan 28 00:23:30.412720 kubelet[3705]: I0128 00:23:30.412196 3705 state_mem.go:75] "Updated machine memory state" Jan 28 00:23:30.416830 kubelet[3705]: I0128 00:23:30.416800 3705 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 00:23:30.416953 kubelet[3705]: I0128 00:23:30.416937 3705 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 00:23:30.416984 kubelet[3705]: I0128 00:23:30.416953 3705 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 00:23:30.417967 kubelet[3705]: I0128 00:23:30.417794 3705 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 00:23:30.418689 kubelet[3705]: E0128 00:23:30.418491 3705 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 00:23:30.460776 kubelet[3705]: I0128 00:23:30.460728 3705 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:30.460776 kubelet[3705]: I0128 00:23:30.460735 3705 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:30.461513 kubelet[3705]: I0128 00:23:30.461496 3705 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:30.468987 kubelet[3705]: W0128 00:23:30.468652 3705 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 00:23:30.473447 kubelet[3705]: W0128 00:23:30.473429 3705 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 00:23:30.474135 kubelet[3705]: W0128 00:23:30.474033 3705 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 00:23:30.474135 kubelet[3705]: E0128 00:23:30.474068 3705 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.1.0-n-77eb5aaac5\" already exists" pod="kube-system/kube-scheduler-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:30.519367 kubelet[3705]: I0128 00:23:30.519184 3705 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:30.528865 kubelet[3705]: I0128 00:23:30.528838 3705 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:30.528972 kubelet[3705]: I0128 00:23:30.528910 3705 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:30.659521 kubelet[3705]: I0128 00:23:30.659422 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce46e34d7539e5d95e38a18ed0dc09ac-kubeconfig\") pod \"kube-scheduler-ci-4547.1.0-n-77eb5aaac5\" (UID: \"ce46e34d7539e5d95e38a18ed0dc09ac\") " pod="kube-system/kube-scheduler-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:30.659521 kubelet[3705]: I0128 00:23:30.659455 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4975761530c20a439e519570c6a1988f-ca-certs\") pod \"kube-apiserver-ci-4547.1.0-n-77eb5aaac5\" (UID: \"4975761530c20a439e519570c6a1988f\") " pod="kube-system/kube-apiserver-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:30.659521 kubelet[3705]: I0128 00:23:30.659469 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4975761530c20a439e519570c6a1988f-k8s-certs\") pod \"kube-apiserver-ci-4547.1.0-n-77eb5aaac5\" (UID: \"4975761530c20a439e519570c6a1988f\") " pod="kube-system/kube-apiserver-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:30.659521 kubelet[3705]: I0128 00:23:30.659480 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4975761530c20a439e519570c6a1988f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.1.0-n-77eb5aaac5\" (UID: 
\"4975761530c20a439e519570c6a1988f\") " pod="kube-system/kube-apiserver-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:30.659521 kubelet[3705]: I0128 00:23:30.659493 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/efa124f5c527d00585677f9fe8ab1050-ca-certs\") pod \"kube-controller-manager-ci-4547.1.0-n-77eb5aaac5\" (UID: \"efa124f5c527d00585677f9fe8ab1050\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:30.659682 kubelet[3705]: I0128 00:23:30.659504 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/efa124f5c527d00585677f9fe8ab1050-kubeconfig\") pod \"kube-controller-manager-ci-4547.1.0-n-77eb5aaac5\" (UID: \"efa124f5c527d00585677f9fe8ab1050\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:30.659682 kubelet[3705]: I0128 00:23:30.659520 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/efa124f5c527d00585677f9fe8ab1050-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.1.0-n-77eb5aaac5\" (UID: \"efa124f5c527d00585677f9fe8ab1050\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:30.659682 kubelet[3705]: I0128 00:23:30.659531 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/efa124f5c527d00585677f9fe8ab1050-k8s-certs\") pod \"kube-controller-manager-ci-4547.1.0-n-77eb5aaac5\" (UID: \"efa124f5c527d00585677f9fe8ab1050\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:30.659682 kubelet[3705]: I0128 00:23:30.659542 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/efa124f5c527d00585677f9fe8ab1050-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.1.0-n-77eb5aaac5\" (UID: \"efa124f5c527d00585677f9fe8ab1050\") " pod="kube-system/kube-controller-manager-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:31.345786 kubelet[3705]: I0128 00:23:31.345754 3705 apiserver.go:52] "Watching apiserver" Jan 28 00:23:31.358787 kubelet[3705]: I0128 00:23:31.358760 3705 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 00:23:31.402488 kubelet[3705]: I0128 00:23:31.402465 3705 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:31.402780 kubelet[3705]: I0128 00:23:31.402760 3705 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:31.414150 kubelet[3705]: W0128 00:23:31.413833 3705 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 00:23:31.414150 kubelet[3705]: E0128 00:23:31.413871 3705 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.1.0-n-77eb5aaac5\" already exists" pod="kube-system/kube-scheduler-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:31.414650 kubelet[3705]: W0128 00:23:31.414632 3705 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS 
label is recommended: [must not contain dots] Jan 28 00:23:31.414702 kubelet[3705]: E0128 00:23:31.414674 3705 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.1.0-n-77eb5aaac5\" already exists" pod="kube-system/kube-apiserver-ci-4547.1.0-n-77eb5aaac5" Jan 28 00:23:31.429781 kubelet[3705]: I0128 00:23:31.429744 3705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547.1.0-n-77eb5aaac5" podStartSLOduration=1.429734713 podStartE2EDuration="1.429734713s" podCreationTimestamp="2026-01-28 00:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:23:31.419955276 +0000 UTC m=+1.114552080" watchObservedRunningTime="2026-01-28 00:23:31.429734713 +0000 UTC m=+1.124331509" Jan 28 00:23:31.429904 kubelet[3705]: I0128 00:23:31.429876 3705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547.1.0-n-77eb5aaac5" podStartSLOduration=3.429811531 podStartE2EDuration="3.429811531s" podCreationTimestamp="2026-01-28 00:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:23:31.429609421 +0000 UTC m=+1.124206225" watchObservedRunningTime="2026-01-28 00:23:31.429811531 +0000 UTC m=+1.124408335" Jan 28 00:23:31.448987 kubelet[3705]: I0128 00:23:31.448949 3705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547.1.0-n-77eb5aaac5" podStartSLOduration=1.448941313 podStartE2EDuration="1.448941313s" podCreationTimestamp="2026-01-28 00:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:23:31.438955046 +0000 UTC m=+1.133551938" watchObservedRunningTime="2026-01-28 00:23:31.448941313 +0000 UTC m=+1.143538117" Jan 28 00:23:35.659974 kubelet[3705]: I0128 00:23:35.659875 3705 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 00:23:35.661052 containerd[2198]: time="2026-01-28T00:23:35.660532085Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 00:23:35.661254 kubelet[3705]: I0128 00:23:35.660680 3705 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 00:23:36.471420 systemd[1]: Created slice kubepods-besteffort-pod3bc44387_b934_4b82_9409_c6fbf7823b50.slice - libcontainer container kubepods-besteffort-pod3bc44387_b934_4b82_9409_c6fbf7823b50.slice. 
Jan 28 00:23:36.498115 kubelet[3705]: I0128 00:23:36.498089 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bc44387-b934-4b82-9409-c6fbf7823b50-lib-modules\") pod \"kube-proxy-fj7sb\" (UID: \"3bc44387-b934-4b82-9409-c6fbf7823b50\") " pod="kube-system/kube-proxy-fj7sb" Jan 28 00:23:36.498115 kubelet[3705]: I0128 00:23:36.498117 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3bc44387-b934-4b82-9409-c6fbf7823b50-kube-proxy\") pod \"kube-proxy-fj7sb\" (UID: \"3bc44387-b934-4b82-9409-c6fbf7823b50\") " pod="kube-system/kube-proxy-fj7sb" Jan 28 00:23:36.498222 kubelet[3705]: I0128 00:23:36.498131 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bc44387-b934-4b82-9409-c6fbf7823b50-xtables-lock\") pod \"kube-proxy-fj7sb\" (UID: \"3bc44387-b934-4b82-9409-c6fbf7823b50\") " pod="kube-system/kube-proxy-fj7sb" Jan 28 00:23:36.498222 kubelet[3705]: I0128 00:23:36.498144 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zcgd\" (UniqueName: \"kubernetes.io/projected/3bc44387-b934-4b82-9409-c6fbf7823b50-kube-api-access-9zcgd\") pod \"kube-proxy-fj7sb\" (UID: \"3bc44387-b934-4b82-9409-c6fbf7823b50\") " pod="kube-system/kube-proxy-fj7sb" Jan 28 00:23:36.779684 systemd[1]: Created slice kubepods-besteffort-podce13cd6c_88bd_4a2f_a5da_50c9bd786f19.slice - libcontainer container kubepods-besteffort-podce13cd6c_88bd_4a2f_a5da_50c9bd786f19.slice. Jan 28 00:23:36.783056 containerd[2198]: time="2026-01-28T00:23:36.783015884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fj7sb,Uid:3bc44387-b934-4b82-9409-c6fbf7823b50,Namespace:kube-system,Attempt:0,}" Jan 28 00:23:36.799810 kubelet[3705]: I0128 00:23:36.799778 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxqqb\" (UniqueName: \"kubernetes.io/projected/ce13cd6c-88bd-4a2f-a5da-50c9bd786f19-kube-api-access-sxqqb\") pod \"tigera-operator-7dcd859c48-xlr6g\" (UID: \"ce13cd6c-88bd-4a2f-a5da-50c9bd786f19\") " pod="tigera-operator/tigera-operator-7dcd859c48-xlr6g" Jan 28 00:23:36.799810 kubelet[3705]: I0128 00:23:36.799828 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ce13cd6c-88bd-4a2f-a5da-50c9bd786f19-var-lib-calico\") pod \"tigera-operator-7dcd859c48-xlr6g\" (UID: \"ce13cd6c-88bd-4a2f-a5da-50c9bd786f19\") " pod="tigera-operator/tigera-operator-7dcd859c48-xlr6g" Jan 28 00:23:36.826091 containerd[2198]: time="2026-01-28T00:23:36.826059458Z" level=info msg="connecting to shim 1e93442768e03b574dc155bd4e7f0c0815872d9222cbea045876f8276c9ac10a" address="unix:///run/containerd/s/65c6e9303a72962ae2c94e7fb48fd6f5c6651488d41b1638ca40c2f07dda9f5d" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:23:36.842963 systemd[1]: Started cri-containerd-1e93442768e03b574dc155bd4e7f0c0815872d9222cbea045876f8276c9ac10a.scope - libcontainer container 1e93442768e03b574dc155bd4e7f0c0815872d9222cbea045876f8276c9ac10a. 
Jan 28 00:23:36.848000 audit: BPF prog-id=156 op=LOAD Jan 28 00:23:36.852949 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 28 00:23:36.852998 kernel: audit: type=1334 audit(1769559816.848:446): prog-id=156 op=LOAD Jan 28 00:23:36.855000 audit: BPF prog-id=157 op=LOAD Jan 28 00:23:36.861427 kernel: audit: type=1334 audit(1769559816.855:447): prog-id=157 op=LOAD Jan 28 00:23:36.855000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3755 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:36.877312 kernel: audit: type=1300 audit(1769559816.855:447): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3755 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:36.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165393334343237363865303362353734646331353562643465376630 Jan 28 00:23:36.893483 kernel: audit: type=1327 audit(1769559816.855:447): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165393334343237363865303362353734646331353562643465376630 Jan 28 00:23:36.855000 audit: BPF prog-id=157 op=UNLOAD Jan 28 00:23:36.898106 kernel: audit: type=1334 audit(1769559816.855:448): prog-id=157 op=UNLOAD Jan 28 00:23:36.855000 audit[3765]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3755 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:36.913488 kernel: audit: type=1300 audit(1769559816.855:448): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3755 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:36.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165393334343237363865303362353734646331353562643465376630 Jan 28 00:23:36.930868 kernel: audit: type=1327 audit(1769559816.855:448): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165393334343237363865303362353734646331353562643465376630 Jan 28 00:23:36.855000 audit: BPF prog-id=158 op=LOAD Jan 28 00:23:36.936271 kernel: audit: type=1334 audit(1769559816.855:449): prog-id=158 op=LOAD Jan 28 00:23:36.855000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3755 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:36.952450 kernel: audit: type=1300 audit(1769559816.855:449): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3755 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:36.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165393334343237363865303362353734646331353562643465376630 Jan 28 00:23:36.970671 kernel: audit: type=1327 audit(1769559816.855:449): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165393334343237363865303362353734646331353562643465376630 Jan 28 00:23:36.855000 audit: BPF prog-id=159 op=LOAD Jan 28 00:23:36.855000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3755 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:36.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165393334343237363865303362353734646331353562643465376630 Jan 28 00:23:36.855000 audit: BPF prog-id=159 op=UNLOAD Jan 28 00:23:36.855000 audit[3765]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3755 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:36.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165393334343237363865303362353734646331353562643465376630 Jan 28 00:23:36.855000 audit: BPF prog-id=158 op=UNLOAD Jan 28 00:23:36.855000 audit[3765]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3755 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:36.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165393334343237363865303362353734646331353562643465376630 Jan 28 00:23:36.855000 audit: BPF prog-id=160 op=LOAD Jan 28 00:23:36.855000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3755 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:36.855000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165393334343237363865303362353734646331353562643465376630 Jan 28 00:23:36.973262 containerd[2198]: time="2026-01-28T00:23:36.973236758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fj7sb,Uid:3bc44387-b934-4b82-9409-c6fbf7823b50,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e93442768e03b574dc155bd4e7f0c0815872d9222cbea045876f8276c9ac10a\"" Jan 28 00:23:36.976184 containerd[2198]: time="2026-01-28T00:23:36.976104658Z" level=info msg="CreateContainer within sandbox \"1e93442768e03b574dc155bd4e7f0c0815872d9222cbea045876f8276c9ac10a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 00:23:37.001477 containerd[2198]: time="2026-01-28T00:23:37.001443466Z" level=info msg="Container 3983dd32a52a275e8476ca151b58515a54a48a848587e47f01f435e7d2abb403: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:23:37.021630 containerd[2198]: time="2026-01-28T00:23:37.021572601Z" level=info msg="CreateContainer within sandbox \"1e93442768e03b574dc155bd4e7f0c0815872d9222cbea045876f8276c9ac10a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3983dd32a52a275e8476ca151b58515a54a48a848587e47f01f435e7d2abb403\"" Jan 28 00:23:37.022817 containerd[2198]: time="2026-01-28T00:23:37.022785929Z" level=info msg="StartContainer for \"3983dd32a52a275e8476ca151b58515a54a48a848587e47f01f435e7d2abb403\"" Jan 28 00:23:37.023950 containerd[2198]: time="2026-01-28T00:23:37.023926607Z" level=info msg="connecting to shim 3983dd32a52a275e8476ca151b58515a54a48a848587e47f01f435e7d2abb403" address="unix:///run/containerd/s/65c6e9303a72962ae2c94e7fb48fd6f5c6651488d41b1638ca40c2f07dda9f5d" protocol=ttrpc version=3 Jan 28 00:23:37.036948 systemd[1]: Started cri-containerd-3983dd32a52a275e8476ca151b58515a54a48a848587e47f01f435e7d2abb403.scope - libcontainer container 3983dd32a52a275e8476ca151b58515a54a48a848587e47f01f435e7d2abb403. 
Jan 28 00:23:37.070000 audit: BPF prog-id=161 op=LOAD Jan 28 00:23:37.070000 audit[3793]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3755 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339383364643332613532613237356538343736636131353162353835 Jan 28 00:23:37.070000 audit: BPF prog-id=162 op=LOAD Jan 28 00:23:37.070000 audit[3793]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3755 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339383364643332613532613237356538343736636131353162353835 Jan 28 00:23:37.070000 audit: BPF prog-id=162 op=UNLOAD Jan 28 00:23:37.070000 audit[3793]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3755 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339383364643332613532613237356538343736636131353162353835 Jan 28 00:23:37.070000 audit: BPF prog-id=161 op=UNLOAD Jan 28 00:23:37.070000 audit[3793]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3755 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339383364643332613532613237356538343736636131353162353835 Jan 28 00:23:37.071000 audit: BPF prog-id=163 op=LOAD Jan 28 00:23:37.071000 audit[3793]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3755 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339383364643332613532613237356538343736636131353162353835 Jan 28 00:23:37.084110 containerd[2198]: time="2026-01-28T00:23:37.084085652Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xlr6g,Uid:ce13cd6c-88bd-4a2f-a5da-50c9bd786f19,Namespace:tigera-operator,Attempt:0,}" Jan 28 00:23:37.091723 containerd[2198]: time="2026-01-28T00:23:37.091662021Z" level=info msg="StartContainer for \"3983dd32a52a275e8476ca151b58515a54a48a848587e47f01f435e7d2abb403\" returns successfully" Jan 28 00:23:37.124444 containerd[2198]: time="2026-01-28T00:23:37.124405235Z" level=info msg="connecting to shim eb79fc45e4b49103f8c19f89033424bcc66be99f354e77adf8a468e2a1ff94aa" address="unix:///run/containerd/s/00fa0c473d98876ce6a3559aeb2235180f529b690147c2a5075374fdc875d761" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:23:37.140081 systemd[1]: Started cri-containerd-eb79fc45e4b49103f8c19f89033424bcc66be99f354e77adf8a468e2a1ff94aa.scope - libcontainer container eb79fc45e4b49103f8c19f89033424bcc66be99f354e77adf8a468e2a1ff94aa. Jan 28 00:23:37.147000 audit: BPF prog-id=164 op=LOAD Jan 28 00:23:37.147000 audit: BPF prog-id=165 op=LOAD Jan 28 00:23:37.147000 audit[3846]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3835 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562373966633435653462343931303366386331396638393033333432 Jan 28 00:23:37.147000 audit: BPF prog-id=165 op=UNLOAD Jan 28 00:23:37.147000 audit[3846]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3835 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562373966633435653462343931303366386331396638393033333432 Jan 28 00:23:37.148000 audit: BPF prog-id=166 op=LOAD Jan 28 00:23:37.148000 audit[3846]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3835 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562373966633435653462343931303366386331396638393033333432 Jan 28 00:23:37.148000 audit: BPF prog-id=167 op=LOAD Jan 28 00:23:37.148000 audit[3846]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3835 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.148000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562373966633435653462343931303366386331396638393033333432 Jan 28 00:23:37.148000 audit: BPF prog-id=167 op=UNLOAD Jan 28 00:23:37.148000 audit[3846]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3835 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562373966633435653462343931303366386331396638393033333432 Jan 28 00:23:37.148000 audit: BPF prog-id=166 op=UNLOAD Jan 28 00:23:37.148000 audit[3846]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3835 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562373966633435653462343931303366386331396638393033333432 Jan 28 00:23:37.148000 audit: BPF prog-id=168 op=LOAD Jan 28 00:23:37.148000 audit[3846]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3835 pid=3846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562373966633435653462343931303366386331396638393033333432 Jan 28 00:23:37.174540 containerd[2198]: time="2026-01-28T00:23:37.174509765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xlr6g,Uid:ce13cd6c-88bd-4a2f-a5da-50c9bd786f19,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"eb79fc45e4b49103f8c19f89033424bcc66be99f354e77adf8a468e2a1ff94aa\"" Jan 28 00:23:37.177272 containerd[2198]: time="2026-01-28T00:23:37.177251126Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 28 00:23:37.187000 audit[3903]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=3903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.187000 audit[3903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe8e17240 a2=0 a3=1 items=0 ppid=3806 pid=3903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.187000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 00:23:37.187000 audit[3904]: NETFILTER_CFG table=mangle:58 family=10 entries=1 
op=nft_register_chain pid=3904 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.187000 audit[3904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffc88e920 a2=0 a3=1 items=0 ppid=3806 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.187000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 00:23:37.188000 audit[3905]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_chain pid=3905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.188000 audit[3905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff96e5e50 a2=0 a3=1 items=0 ppid=3806 pid=3905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.188000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 00:23:37.189000 audit[3906]: NETFILTER_CFG table=nat:60 family=10 entries=1 op=nft_register_chain pid=3906 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.189000 audit[3906]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffea4595d0 a2=0 a3=1 items=0 ppid=3806 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.189000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 00:23:37.190000 audit[3907]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_chain pid=3907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.190000 audit[3907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffc83fdf0 a2=0 a3=1 items=0 ppid=3806 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.190000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 00:23:37.190000 audit[3908]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=3908 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.190000 audit[3908]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdb589410 a2=0 a3=1 items=0 ppid=3806 pid=3908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.190000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 00:23:37.292000 audit[3909]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.292000 audit[3909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffffc920150 a2=0 a3=1 items=0 
ppid=3806 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.292000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 00:23:37.294000 audit[3911]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.294000 audit[3911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffccc74310 a2=0 a3=1 items=0 ppid=3806 pid=3911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.294000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 28 00:23:37.297000 audit[3914]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=3914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.297000 audit[3914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe4619b30 a2=0 a3=1 items=0 ppid=3806 pid=3914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.297000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 28 00:23:37.298000 audit[3915]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=3915 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.298000 audit[3915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc40f0a10 a2=0 a3=1 items=0 ppid=3806 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.298000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 00:23:37.300000 audit[3917]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.300000 audit[3917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc41317c0 a2=0 a3=1 items=0 ppid=3806 pid=3917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.300000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 00:23:37.301000 audit[3918]: NETFILTER_CFG table=filter:68 family=2 
entries=1 op=nft_register_chain pid=3918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.301000 audit[3918]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffcdfcfe0 a2=0 a3=1 items=0 ppid=3806 pid=3918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.301000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 00:23:37.303000 audit[3920]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.303000 audit[3920]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff6b04c60 a2=0 a3=1 items=0 ppid=3806 pid=3920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.303000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 28 00:23:37.305000 audit[3923]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=3923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.305000 audit[3923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffce977e30 a2=0 a3=1 items=0 ppid=3806 pid=3923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.305000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 28 00:23:37.306000 audit[3924]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=3924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.306000 audit[3924]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf896a90 a2=0 a3=1 items=0 ppid=3806 pid=3924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.306000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 00:23:37.308000 audit[3926]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3926 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.308000 audit[3926]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffebc66e70 a2=0 a3=1 items=0 ppid=3806 pid=3926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.308000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 00:23:37.309000 audit[3927]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=3927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.309000 audit[3927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffce393c00 a2=0 a3=1 items=0 ppid=3806 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.309000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 00:23:37.312000 audit[3929]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=3929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.312000 audit[3929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc53ff070 a2=0 a3=1 items=0 ppid=3806 pid=3929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.312000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 00:23:37.315000 audit[3932]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=3932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.315000 audit[3932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe092c8a0 a2=0 a3=1 items=0 ppid=3806 pid=3932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.315000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 00:23:37.317000 audit[3935]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=3935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.317000 audit[3935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdd4512e0 a2=0 a3=1 items=0 ppid=3806 pid=3935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.317000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 28 00:23:37.318000 audit[3936]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.318000 audit[3936]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffeed21c20 a2=0 a3=1 items=0 ppid=3806 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.318000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 00:23:37.320000 audit[3938]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.320000 audit[3938]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe1965330 a2=0 a3=1 items=0 ppid=3806 pid=3938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.320000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 00:23:37.323000 audit[3941]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=3941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.323000 audit[3941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe6b146c0 a2=0 a3=1 items=0 ppid=3806 pid=3941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.323000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 00:23:37.324000 audit[3942]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=3942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.324000 audit[3942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef9454f0 a2=0 a3=1 items=0 ppid=3806 pid=3942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.324000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 00:23:37.325000 audit[3944]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=3944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:23:37.325000 audit[3944]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffce546d00 a2=0 a3=1 items=0 ppid=3806 pid=3944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.325000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 00:23:37.348000 audit[3950]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=3950 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:37.348000 audit[3950]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd59fa640 a2=0 a3=1 items=0 ppid=3806 pid=3950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:37.354000 audit[3950]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=3950 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:37.354000 audit[3950]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffd59fa640 a2=0 a3=1 items=0 ppid=3806 pid=3950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.354000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:37.355000 audit[3955]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3955 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.355000 audit[3955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff995f220 a2=0 a3=1 items=0 ppid=3806 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.355000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 00:23:37.357000 audit[3957]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=3957 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.357000 audit[3957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe4540920 a2=0 a3=1 items=0 ppid=3806 pid=3957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.357000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 28 00:23:37.360000 audit[3960]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.360000 audit[3960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd712a580 a2=0 a3=1 items=0 ppid=3806 pid=3960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.360000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 28 00:23:37.361000 audit[3961]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3961 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.361000 audit[3961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffebd80590 a2=0 a3=1 items=0 ppid=3806 pid=3961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.361000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 00:23:37.363000 audit[3963]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3963 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.363000 audit[3963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe67008a0 a2=0 a3=1 items=0 ppid=3806 pid=3963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.363000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 00:23:37.364000 audit[3964]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.364000 audit[3964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffda8864a0 a2=0 a3=1 items=0 ppid=3806 pid=3964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.364000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 00:23:37.366000 audit[3966]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.366000 audit[3966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc13c4430 a2=0 a3=1 items=0 ppid=3806 pid=3966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.366000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 28 00:23:37.369000 audit[3969]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=3969 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.369000 audit[3969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe509b6a0 a2=0 a3=1 items=0 ppid=3806 pid=3969 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.369000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 28 00:23:37.370000 audit[3970]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=3970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.370000 audit[3970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdfbd2fb0 a2=0 a3=1 items=0 ppid=3806 pid=3970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.370000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 00:23:37.372000 audit[3972]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.372000 audit[3972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffda8bd340 a2=0 a3=1 items=0 ppid=3806 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.372000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 00:23:37.373000 audit[3973]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=3973 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.373000 audit[3973]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd6e12b90 a2=0 a3=1 items=0 ppid=3806 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.373000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 00:23:37.375000 audit[3975]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=3975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.375000 audit[3975]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd40c76f0 a2=0 a3=1 items=0 ppid=3806 pid=3975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.375000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 00:23:37.377000 audit[3978]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule 
pid=3978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.377000 audit[3978]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe6860a50 a2=0 a3=1 items=0 ppid=3806 pid=3978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.377000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 28 00:23:37.380000 audit[3981]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=3981 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.380000 audit[3981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffec4fad20 a2=0 a3=1 items=0 ppid=3806 pid=3981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.380000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 28 00:23:37.381000 audit[3982]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.381000 audit[3982]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffca193ae0 a2=0 a3=1 items=0 ppid=3806 pid=3982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.381000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 00:23:37.383000 audit[3984]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=3984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.383000 audit[3984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff3ead960 a2=0 a3=1 items=0 ppid=3806 pid=3984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.383000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 00:23:37.386000 audit[3987]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=3987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.386000 audit[3987]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffedc477d0 a2=0 a3=1 items=0 ppid=3806 pid=3987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.386000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 00:23:37.386000 audit[3988]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=3988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.386000 audit[3988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1e6e3d0 a2=0 a3=1 items=0 ppid=3806 pid=3988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.386000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 00:23:37.388000 audit[3990]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=3990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.388000 audit[3990]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd1463da0 a2=0 a3=1 items=0 ppid=3806 pid=3990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.388000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 00:23:37.389000 audit[3991]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=3991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.389000 audit[3991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdd7b64a0 a2=0 a3=1 items=0 ppid=3806 pid=3991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.389000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 00:23:37.391000 audit[3993]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=3993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.391000 audit[3993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc7a48280 a2=0 a3=1 items=0 ppid=3806 pid=3993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.391000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 00:23:37.394000 audit[3996]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=3996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:23:37.394000 audit[3996]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc62f7ce0 a2=0 a3=1 items=0 ppid=3806 pid=3996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 28 00:23:37.394000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 00:23:37.397000 audit[3998]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=3998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 00:23:37.397000 audit[3998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffd56f7f50 a2=0 a3=1 items=0 ppid=3806 pid=3998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.397000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:37.398000 audit[3998]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=3998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 00:23:37.398000 audit[3998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd56f7f50 a2=0 a3=1 items=0 ppid=3806 pid=3998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:37.398000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:37.423837 kubelet[3705]: I0128 00:23:37.423236 3705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fj7sb" podStartSLOduration=1.423224376 podStartE2EDuration="1.423224376s" podCreationTimestamp="2026-01-28 00:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:23:37.42310458 +0000 UTC m=+7.117701376" watchObservedRunningTime="2026-01-28 00:23:37.423224376 +0000 UTC m=+7.117821172" Jan 28 00:23:38.818074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount965219233.mount: Deactivated successfully. 
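The PROCTITLE fields in the audit records above are hex-encoded, NUL-separated argv strings recording the exact iptables/ip6tables invocations kube-proxy made while registering its chains and rules. A minimal decoding sketch (Python; the hex value is copied from the first KUBE-SERVICES chain registration above, the variable names are illustrative):

# PROCTITLE value taken from the nft_register_chain event for KUBE-SERVICES above.
proctitle = "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572"
# argv entries are NUL-separated inside the audit record.
argv = [a.decode() for a in bytes.fromhex(proctitle).split(b"\x00")]
print(" ".join(argv))
# -> iptables -w 5 -W 100000 -N KUBE-SERVICES -t filter

The same decoding applies to the ip6tables and iptables-restore PROCTITLE values that follow.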
Jan 28 00:23:39.598608 containerd[2198]: time="2026-01-28T00:23:39.598557627Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:39.601944 containerd[2198]: time="2026-01-28T00:23:39.601756370Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 28 00:23:39.604906 containerd[2198]: time="2026-01-28T00:23:39.604881992Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:39.609114 containerd[2198]: time="2026-01-28T00:23:39.609080803Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:39.609529 containerd[2198]: time="2026-01-28T00:23:39.609506030Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.432228416s" Jan 28 00:23:39.609649 containerd[2198]: time="2026-01-28T00:23:39.609586872Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 28 00:23:39.611812 containerd[2198]: time="2026-01-28T00:23:39.611552654Z" level=info msg="CreateContainer within sandbox \"eb79fc45e4b49103f8c19f89033424bcc66be99f354e77adf8a468e2a1ff94aa\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 28 00:23:39.633346 containerd[2198]: time="2026-01-28T00:23:39.631103061Z" level=info msg="Container 25998a847b0555c0baaf31c064c8a8ec5f23f28acc96023b75a9f93641fd611a: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:23:39.632134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3553343741.mount: Deactivated successfully. Jan 28 00:23:39.649930 containerd[2198]: time="2026-01-28T00:23:39.649902311Z" level=info msg="CreateContainer within sandbox \"eb79fc45e4b49103f8c19f89033424bcc66be99f354e77adf8a468e2a1ff94aa\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"25998a847b0555c0baaf31c064c8a8ec5f23f28acc96023b75a9f93641fd611a\"" Jan 28 00:23:39.650698 containerd[2198]: time="2026-01-28T00:23:39.650658812Z" level=info msg="StartContainer for \"25998a847b0555c0baaf31c064c8a8ec5f23f28acc96023b75a9f93641fd611a\"" Jan 28 00:23:39.651621 containerd[2198]: time="2026-01-28T00:23:39.651576613Z" level=info msg="connecting to shim 25998a847b0555c0baaf31c064c8a8ec5f23f28acc96023b75a9f93641fd611a" address="unix:///run/containerd/s/00fa0c473d98876ce6a3559aeb2235180f529b690147c2a5075374fdc875d761" protocol=ttrpc version=3 Jan 28 00:23:39.670977 systemd[1]: Started cri-containerd-25998a847b0555c0baaf31c064c8a8ec5f23f28acc96023b75a9f93641fd611a.scope - libcontainer container 25998a847b0555c0baaf31c064c8a8ec5f23f28acc96023b75a9f93641fd611a. 
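The pull speed implied by the containerd lines above can be estimated directly from the two numbers reported in the log (bytes read for the layer fetch versus the wall-clock duration in the "Pulled image" line); a rough sketch:

# Figures taken from the "stop pulling image" and "Pulled image ... in 2.432228416s" lines above.
bytes_read = 20_773_434
duration_s = 2.432228416
print(f"{bytes_read / duration_s / 1e6:.1f} MB/s")  # roughly 8.5 MB/s for quay.io/tigera/operator:v1.38.7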
Jan 28 00:23:39.678000 audit: BPF prog-id=169 op=LOAD Jan 28 00:23:39.678000 audit: BPF prog-id=170 op=LOAD Jan 28 00:23:39.678000 audit[4008]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3835 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:39.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235393938613834376230353535633062616166333163303634633861 Jan 28 00:23:39.678000 audit: BPF prog-id=170 op=UNLOAD Jan 28 00:23:39.678000 audit[4008]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3835 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:39.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235393938613834376230353535633062616166333163303634633861 Jan 28 00:23:39.678000 audit: BPF prog-id=171 op=LOAD Jan 28 00:23:39.678000 audit[4008]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3835 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:39.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235393938613834376230353535633062616166333163303634633861 Jan 28 00:23:39.678000 audit: BPF prog-id=172 op=LOAD Jan 28 00:23:39.678000 audit[4008]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3835 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:39.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235393938613834376230353535633062616166333163303634633861 Jan 28 00:23:39.678000 audit: BPF prog-id=172 op=UNLOAD Jan 28 00:23:39.678000 audit[4008]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3835 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:39.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235393938613834376230353535633062616166333163303634633861 Jan 28 00:23:39.678000 audit: BPF prog-id=171 op=UNLOAD Jan 28 00:23:39.678000 audit[4008]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3835 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:39.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235393938613834376230353535633062616166333163303634633861 Jan 28 00:23:39.678000 audit: BPF prog-id=173 op=LOAD Jan 28 00:23:39.678000 audit[4008]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3835 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:39.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235393938613834376230353535633062616166333163303634633861 Jan 28 00:23:39.697388 containerd[2198]: time="2026-01-28T00:23:39.697353794Z" level=info msg="StartContainer for \"25998a847b0555c0baaf31c064c8a8ec5f23f28acc96023b75a9f93641fd611a\" returns successfully" Jan 28 00:23:44.790173 sudo[2637]: pam_unix(sudo:session): session closed for user root Jan 28 00:23:44.809331 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 28 00:23:44.809520 kernel: audit: type=1106 audit(1769559824.789:526): pid=2637 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 00:23:44.789000 audit[2637]: USER_END pid=2637 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 00:23:44.789000 audit[2637]: CRED_DISP pid=2637 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 00:23:44.827260 kernel: audit: type=1104 audit(1769559824.789:527): pid=2637 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 00:23:44.860668 sshd[2636]: Connection closed by 10.200.16.10 port 52540 Jan 28 00:23:44.862562 sshd-session[2632]: pam_unix(sshd:session): session closed for user core Jan 28 00:23:44.863000 audit[2632]: USER_END pid=2632 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:23:44.885955 systemd[1]: sshd@6-10.200.20.33:22-10.200.16.10:52540.service: Deactivated successfully. Jan 28 00:23:44.887396 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 00:23:44.887563 systemd[1]: session-10.scope: Consumed 3.493s CPU time, 219.3M memory peak. 
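The bare kernel "audit: type=NNNN" lines interleaved above are the same events as the named records next to them (the pairing is visible in the log itself, e.g. type=1106 beside USER_END and type=1104 beside CRED_DISP). A small lookup sketch for the numeric record types seen in this boot, assuming the standard linux/audit.h values:

# Numeric audit record types appearing in this log and their record names
# (values per linux/audit.h; the pairings can also be read off the adjacent
# kernel audit lines above).
AUDIT_TYPES = {
    1104: "CRED_DISP",
    1106: "USER_END",
    1131: "SERVICE_STOP",
    1300: "SYSCALL",
    1325: "NETFILTER_CFG",
    1327: "PROCTITLE",
    1334: "BPF",
}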
Jan 28 00:23:44.863000 audit[2632]: CRED_DISP pid=2632 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:23:44.889894 systemd-logind[2162]: Session 10 logged out. Waiting for processes to exit. Jan 28 00:23:44.904019 kernel: audit: type=1106 audit(1769559824.863:528): pid=2632 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:23:44.904086 kernel: audit: type=1104 audit(1769559824.863:529): pid=2632 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:23:44.906023 systemd-logind[2162]: Removed session 10. Jan 28 00:23:44.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.33:22-10.200.16.10:52540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:23:44.922343 kernel: audit: type=1131 audit(1769559824.885:530): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.33:22-10.200.16.10:52540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:23:45.898148 kubelet[3705]: I0128 00:23:45.898086 3705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-xlr6g" podStartSLOduration=7.463382817 podStartE2EDuration="9.898072091s" podCreationTimestamp="2026-01-28 00:23:36 +0000 UTC" firstStartedPulling="2026-01-28 00:23:37.175619626 +0000 UTC m=+6.870216422" lastFinishedPulling="2026-01-28 00:23:39.610308892 +0000 UTC m=+9.304905696" observedRunningTime="2026-01-28 00:23:40.434942172 +0000 UTC m=+10.129538976" watchObservedRunningTime="2026-01-28 00:23:45.898072091 +0000 UTC m=+15.592668887" Jan 28 00:23:46.410000 audit[4083]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4083 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:46.410000 audit[4083]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc150f180 a2=0 a3=1 items=0 ppid=3806 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:46.445576 kernel: audit: type=1325 audit(1769559826.410:531): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4083 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:46.445648 kernel: audit: type=1300 audit(1769559826.410:531): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc150f180 a2=0 a3=1 items=0 ppid=3806 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:46.410000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:46.456882 kernel: audit: type=1327 audit(1769559826.410:531): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:46.425000 audit[4083]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4083 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:46.468824 kernel: audit: type=1325 audit(1769559826.425:532): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4083 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:46.425000 audit[4083]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc150f180 a2=0 a3=1 items=0 ppid=3806 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:46.488119 kernel: audit: type=1300 audit(1769559826.425:532): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc150f180 a2=0 a3=1 items=0 ppid=3806 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:46.425000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:46.489000 audit[4085]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4085 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:46.489000 audit[4085]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc6153f30 a2=0 a3=1 items=0 ppid=3806 pid=4085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:46.489000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:46.492000 audit[4085]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4085 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:46.492000 audit[4085]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc6153f30 a2=0 a3=1 items=0 ppid=3806 pid=4085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:46.492000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:48.482000 audit[4087]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:48.482000 audit[4087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd76311e0 a2=0 a3=1 items=0 ppid=3806 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:48.482000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:48.489000 audit[4087]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:48.489000 audit[4087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd76311e0 a2=0 a3=1 items=0 ppid=3806 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:48.489000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:48.499000 audit[4089]: NETFILTER_CFG table=filter:114 family=2 entries=18 op=nft_register_rule pid=4089 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:48.499000 audit[4089]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff45bd3f0 a2=0 a3=1 items=0 ppid=3806 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:48.499000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:48.505000 audit[4089]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4089 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:48.505000 audit[4089]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff45bd3f0 a2=0 a3=1 items=0 ppid=3806 pid=4089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:48.505000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:49.516000 audit[4091]: NETFILTER_CFG table=filter:116 family=2 entries=19 op=nft_register_rule pid=4091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:49.516000 audit[4091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc2650150 a2=0 a3=1 items=0 ppid=3806 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:49.516000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:49.520000 audit[4091]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4091 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:49.520000 audit[4091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc2650150 a2=0 a3=1 items=0 ppid=3806 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:49.520000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:50.524000 audit[4095]: 
NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4095 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:50.528479 kernel: kauditd_printk_skb: 25 callbacks suppressed Jan 28 00:23:50.528548 kernel: audit: type=1325 audit(1769559830.524:541): table=filter:118 family=2 entries=21 op=nft_register_rule pid=4095 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:50.524000 audit[4095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe122c190 a2=0 a3=1 items=0 ppid=3806 pid=4095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:50.559979 kernel: audit: type=1300 audit(1769559830.524:541): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe122c190 a2=0 a3=1 items=0 ppid=3806 pid=4095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:50.524000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:50.573982 kernel: audit: type=1327 audit(1769559830.524:541): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:50.539000 audit[4095]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4095 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:50.608635 kernel: audit: type=1325 audit(1769559830.539:542): table=nat:119 family=2 entries=12 op=nft_register_rule pid=4095 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:50.539000 audit[4095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe122c190 a2=0 a3=1 items=0 ppid=3806 pid=4095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:50.629887 kernel: audit: type=1300 audit(1769559830.539:542): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe122c190 a2=0 a3=1 items=0 ppid=3806 pid=4095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:50.539000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:50.640719 kernel: audit: type=1327 audit(1769559830.539:542): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:50.647405 systemd[1]: Created slice kubepods-besteffort-pod0beb59b1_56cc_43da_81e7_b979c912ebed.slice - libcontainer container kubepods-besteffort-pod0beb59b1_56cc_43da_81e7_b979c912ebed.slice. Jan 28 00:23:50.712334 systemd[1]: Created slice kubepods-besteffort-pod6ac7c477_5f44_4669_a30c_04f898072d78.slice - libcontainer container kubepods-besteffort-pod6ac7c477_5f44_4669_a30c_04f898072d78.slice. 
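The slice names in the two "Created slice" lines above are derived from each pod's QoS class and UID (dashes replaced with underscores), which is why the same UIDs reappear in the kubelet volume paths that follow. A small sketch reproducing the calico-typha pod's slice name from its UID:

# Pod UID as it appears in the kubelet volume lines below.
pod_uid = "0beb59b1-56cc-43da-81e7-b979c912ebed"
# Naming used by the kubelet's systemd cgroup driver: kubepods-<qos>-pod<uid with '-' -> '_'>.slice
slice_name = f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"
print(slice_name)
# kubepods-besteffort-pod0beb59b1_56cc_43da_81e7_b979c912ebed.slice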
Jan 28 00:23:50.784931 kubelet[3705]: I0128 00:23:50.784578 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0beb59b1-56cc-43da-81e7-b979c912ebed-tigera-ca-bundle\") pod \"calico-typha-77c56fc454-mggnt\" (UID: \"0beb59b1-56cc-43da-81e7-b979c912ebed\") " pod="calico-system/calico-typha-77c56fc454-mggnt" Jan 28 00:23:50.784931 kubelet[3705]: I0128 00:23:50.784621 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0beb59b1-56cc-43da-81e7-b979c912ebed-typha-certs\") pod \"calico-typha-77c56fc454-mggnt\" (UID: \"0beb59b1-56cc-43da-81e7-b979c912ebed\") " pod="calico-system/calico-typha-77c56fc454-mggnt" Jan 28 00:23:50.784931 kubelet[3705]: I0128 00:23:50.784634 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4kcp\" (UniqueName: \"kubernetes.io/projected/0beb59b1-56cc-43da-81e7-b979c912ebed-kube-api-access-p4kcp\") pod \"calico-typha-77c56fc454-mggnt\" (UID: \"0beb59b1-56cc-43da-81e7-b979c912ebed\") " pod="calico-system/calico-typha-77c56fc454-mggnt" Jan 28 00:23:50.833641 kubelet[3705]: E0128 00:23:50.833601 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2sfv" podUID="f6d08a70-95be-4168-8a2f-3e965a6278e2" Jan 28 00:23:50.885345 kubelet[3705]: I0128 00:23:50.885316 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjhpj\" (UniqueName: \"kubernetes.io/projected/6ac7c477-5f44-4669-a30c-04f898072d78-kube-api-access-kjhpj\") pod \"calico-node-gtwbg\" (UID: \"6ac7c477-5f44-4669-a30c-04f898072d78\") " pod="calico-system/calico-node-gtwbg" Jan 28 00:23:50.885524 kubelet[3705]: I0128 00:23:50.885473 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6ac7c477-5f44-4669-a30c-04f898072d78-cni-bin-dir\") pod \"calico-node-gtwbg\" (UID: \"6ac7c477-5f44-4669-a30c-04f898072d78\") " pod="calico-system/calico-node-gtwbg" Jan 28 00:23:50.885659 kubelet[3705]: I0128 00:23:50.885566 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6ac7c477-5f44-4669-a30c-04f898072d78-var-lib-calico\") pod \"calico-node-gtwbg\" (UID: \"6ac7c477-5f44-4669-a30c-04f898072d78\") " pod="calico-system/calico-node-gtwbg" Jan 28 00:23:50.885659 kubelet[3705]: I0128 00:23:50.885616 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6ac7c477-5f44-4669-a30c-04f898072d78-cni-net-dir\") pod \"calico-node-gtwbg\" (UID: \"6ac7c477-5f44-4669-a30c-04f898072d78\") " pod="calico-system/calico-node-gtwbg" Jan 28 00:23:50.885659 kubelet[3705]: I0128 00:23:50.885628 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6ac7c477-5f44-4669-a30c-04f898072d78-node-certs\") pod \"calico-node-gtwbg\" (UID: \"6ac7c477-5f44-4669-a30c-04f898072d78\") " pod="calico-system/calico-node-gtwbg" Jan 
28 00:23:50.885659 kubelet[3705]: I0128 00:23:50.885640 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6ac7c477-5f44-4669-a30c-04f898072d78-policysync\") pod \"calico-node-gtwbg\" (UID: \"6ac7c477-5f44-4669-a30c-04f898072d78\") " pod="calico-system/calico-node-gtwbg" Jan 28 00:23:50.885972 kubelet[3705]: I0128 00:23:50.885757 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6ac7c477-5f44-4669-a30c-04f898072d78-flexvol-driver-host\") pod \"calico-node-gtwbg\" (UID: \"6ac7c477-5f44-4669-a30c-04f898072d78\") " pod="calico-system/calico-node-gtwbg" Jan 28 00:23:50.886237 kubelet[3705]: I0128 00:23:50.885778 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ac7c477-5f44-4669-a30c-04f898072d78-lib-modules\") pod \"calico-node-gtwbg\" (UID: \"6ac7c477-5f44-4669-a30c-04f898072d78\") " pod="calico-system/calico-node-gtwbg" Jan 28 00:23:50.886237 kubelet[3705]: I0128 00:23:50.886040 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ac7c477-5f44-4669-a30c-04f898072d78-tigera-ca-bundle\") pod \"calico-node-gtwbg\" (UID: \"6ac7c477-5f44-4669-a30c-04f898072d78\") " pod="calico-system/calico-node-gtwbg" Jan 28 00:23:50.886237 kubelet[3705]: I0128 00:23:50.886058 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ac7c477-5f44-4669-a30c-04f898072d78-xtables-lock\") pod \"calico-node-gtwbg\" (UID: \"6ac7c477-5f44-4669-a30c-04f898072d78\") " pod="calico-system/calico-node-gtwbg" Jan 28 00:23:50.886237 kubelet[3705]: I0128 00:23:50.886069 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6ac7c477-5f44-4669-a30c-04f898072d78-cni-log-dir\") pod \"calico-node-gtwbg\" (UID: \"6ac7c477-5f44-4669-a30c-04f898072d78\") " pod="calico-system/calico-node-gtwbg" Jan 28 00:23:50.886237 kubelet[3705]: I0128 00:23:50.886081 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6ac7c477-5f44-4669-a30c-04f898072d78-var-run-calico\") pod \"calico-node-gtwbg\" (UID: \"6ac7c477-5f44-4669-a30c-04f898072d78\") " pod="calico-system/calico-node-gtwbg" Jan 28 00:23:50.952214 containerd[2198]: time="2026-01-28T00:23:50.952174668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77c56fc454-mggnt,Uid:0beb59b1-56cc-43da-81e7-b979c912ebed,Namespace:calico-system,Attempt:0,}" Jan 28 00:23:50.987478 kubelet[3705]: I0128 00:23:50.987363 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6d08a70-95be-4168-8a2f-3e965a6278e2-kubelet-dir\") pod \"csi-node-driver-w2sfv\" (UID: \"f6d08a70-95be-4168-8a2f-3e965a6278e2\") " pod="calico-system/csi-node-driver-w2sfv" Jan 28 00:23:50.987782 kubelet[3705]: I0128 00:23:50.987761 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk8cf\" (UniqueName: 
\"kubernetes.io/projected/f6d08a70-95be-4168-8a2f-3e965a6278e2-kube-api-access-pk8cf\") pod \"csi-node-driver-w2sfv\" (UID: \"f6d08a70-95be-4168-8a2f-3e965a6278e2\") " pod="calico-system/csi-node-driver-w2sfv" Jan 28 00:23:50.988223 kubelet[3705]: I0128 00:23:50.988109 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f6d08a70-95be-4168-8a2f-3e965a6278e2-varrun\") pod \"csi-node-driver-w2sfv\" (UID: \"f6d08a70-95be-4168-8a2f-3e965a6278e2\") " pod="calico-system/csi-node-driver-w2sfv" Jan 28 00:23:50.988691 kubelet[3705]: I0128 00:23:50.988571 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f6d08a70-95be-4168-8a2f-3e965a6278e2-registration-dir\") pod \"csi-node-driver-w2sfv\" (UID: \"f6d08a70-95be-4168-8a2f-3e965a6278e2\") " pod="calico-system/csi-node-driver-w2sfv" Jan 28 00:23:50.989909 kubelet[3705]: I0128 00:23:50.989879 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f6d08a70-95be-4168-8a2f-3e965a6278e2-socket-dir\") pod \"csi-node-driver-w2sfv\" (UID: \"f6d08a70-95be-4168-8a2f-3e965a6278e2\") " pod="calico-system/csi-node-driver-w2sfv" Jan 28 00:23:50.990267 kubelet[3705]: E0128 00:23:50.990196 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:50.990267 kubelet[3705]: W0128 00:23:50.990212 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:50.990267 kubelet[3705]: E0128 00:23:50.990231 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:50.990600 kubelet[3705]: E0128 00:23:50.990588 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:50.991583 kubelet[3705]: W0128 00:23:50.991501 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:50.992074 kubelet[3705]: E0128 00:23:50.992057 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:50.992771 kubelet[3705]: E0128 00:23:50.992754 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:50.992952 kubelet[3705]: W0128 00:23:50.992870 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:50.992952 kubelet[3705]: E0128 00:23:50.992897 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:23:50.994289 kubelet[3705]: E0128 00:23:50.994253 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:50.994662 kubelet[3705]: W0128 00:23:50.994370 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:50.995130 kubelet[3705]: E0128 00:23:50.994887 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:50.999364 kubelet[3705]: E0128 00:23:50.998865 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:50.999364 kubelet[3705]: W0128 00:23:50.999086 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:50.999364 kubelet[3705]: E0128 00:23:50.999103 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.000346 kubelet[3705]: E0128 00:23:51.000226 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.000346 kubelet[3705]: W0128 00:23:51.000241 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.000346 kubelet[3705]: E0128 00:23:51.000256 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.001642 kubelet[3705]: E0128 00:23:51.001626 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.001868 kubelet[3705]: W0128 00:23:51.001812 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.001868 kubelet[3705]: E0128 00:23:51.001845 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:23:51.006879 containerd[2198]: time="2026-01-28T00:23:51.006785017Z" level=info msg="connecting to shim 036dd409ec7cce668eff59ac0afe56313093de070a25a4b0fe7341a4aac3063b" address="unix:///run/containerd/s/5b0137756b7ba28f7fa433ad7f74faf355eeda9e6a64ced8aa340fedf875894a" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:23:51.014359 kubelet[3705]: E0128 00:23:51.014339 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.014407 kubelet[3705]: W0128 00:23:51.014367 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.014407 kubelet[3705]: E0128 00:23:51.014379 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.017286 containerd[2198]: time="2026-01-28T00:23:51.017231153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gtwbg,Uid:6ac7c477-5f44-4669-a30c-04f898072d78,Namespace:calico-system,Attempt:0,}" Jan 28 00:23:51.032967 systemd[1]: Started cri-containerd-036dd409ec7cce668eff59ac0afe56313093de070a25a4b0fe7341a4aac3063b.scope - libcontainer container 036dd409ec7cce668eff59ac0afe56313093de070a25a4b0fe7341a4aac3063b. Jan 28 00:23:51.044000 audit: BPF prog-id=174 op=LOAD Jan 28 00:23:51.049000 audit: BPF prog-id=175 op=LOAD Jan 28 00:23:51.054582 kernel: audit: type=1334 audit(1769559831.044:543): prog-id=174 op=LOAD Jan 28 00:23:51.054636 kernel: audit: type=1334 audit(1769559831.049:544): prog-id=175 op=LOAD Jan 28 00:23:51.049000 audit[4129]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=4116 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:51.073911 kernel: audit: type=1300 audit(1769559831.049:544): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=4116 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:51.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033366464343039656337636365363638656666353961633061666535 Jan 28 00:23:51.093231 kernel: audit: type=1327 audit(1769559831.049:544): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033366464343039656337636365363638656666353961633061666535 Jan 28 00:23:51.049000 audit: BPF prog-id=175 op=UNLOAD Jan 28 00:23:51.049000 audit[4129]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4116 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:51.049000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033366464343039656337636365363638656666353961633061666535 Jan 28 00:23:51.049000 audit: BPF prog-id=176 op=LOAD Jan 28 00:23:51.049000 audit[4129]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=4116 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:51.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033366464343039656337636365363638656666353961633061666535 Jan 28 00:23:51.054000 audit: BPF prog-id=177 op=LOAD Jan 28 00:23:51.054000 audit[4129]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=4116 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:51.054000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033366464343039656337636365363638656666353961633061666535 Jan 28 00:23:51.054000 audit: BPF prog-id=177 op=UNLOAD Jan 28 00:23:51.054000 audit[4129]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4116 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:51.054000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033366464343039656337636365363638656666353961633061666535 Jan 28 00:23:51.054000 audit: BPF prog-id=176 op=UNLOAD Jan 28 00:23:51.095012 kubelet[3705]: E0128 00:23:51.094982 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.095012 kubelet[3705]: W0128 00:23:51.094999 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.095129 kubelet[3705]: E0128 00:23:51.095017 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:23:51.054000 audit[4129]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4116 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:51.054000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033366464343039656337636365363638656666353961633061666535 Jan 28 00:23:51.054000 audit: BPF prog-id=178 op=LOAD Jan 28 00:23:51.054000 audit[4129]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=4116 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:51.054000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033366464343039656337636365363638656666353961633061666535 Jan 28 00:23:51.096047 kubelet[3705]: E0128 00:23:51.096031 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.096047 kubelet[3705]: W0128 00:23:51.096042 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.096047 kubelet[3705]: E0128 00:23:51.096056 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.096621 kubelet[3705]: E0128 00:23:51.096603 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.096621 kubelet[3705]: W0128 00:23:51.096618 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.096763 kubelet[3705]: E0128 00:23:51.096674 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:23:51.096855 kubelet[3705]: E0128 00:23:51.096842 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.096855 kubelet[3705]: W0128 00:23:51.096852 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.097075 kubelet[3705]: E0128 00:23:51.097031 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.097075 kubelet[3705]: W0128 00:23:51.097043 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.097075 kubelet[3705]: E0128 00:23:51.097054 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.097283 kubelet[3705]: E0128 00:23:51.097270 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.097283 kubelet[3705]: W0128 00:23:51.097281 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.097464 kubelet[3705]: E0128 00:23:51.097290 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.097464 kubelet[3705]: E0128 00:23:51.097355 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.097464 kubelet[3705]: E0128 00:23:51.097390 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.097464 kubelet[3705]: W0128 00:23:51.097395 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.097464 kubelet[3705]: E0128 00:23:51.097440 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.097835 kubelet[3705]: E0128 00:23:51.097771 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.097835 kubelet[3705]: W0128 00:23:51.097784 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.097835 kubelet[3705]: E0128 00:23:51.097797 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:23:51.098597 kubelet[3705]: E0128 00:23:51.098548 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.098917 kubelet[3705]: W0128 00:23:51.098756 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.098917 kubelet[3705]: E0128 00:23:51.098783 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.099310 kubelet[3705]: E0128 00:23:51.099240 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.099492 kubelet[3705]: W0128 00:23:51.099370 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.099538 kubelet[3705]: E0128 00:23:51.099491 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.100541 kubelet[3705]: E0128 00:23:51.100435 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.100541 kubelet[3705]: W0128 00:23:51.100450 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.100541 kubelet[3705]: E0128 00:23:51.100486 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.101397 kubelet[3705]: E0128 00:23:51.101382 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.101705 kubelet[3705]: W0128 00:23:51.101534 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.101705 kubelet[3705]: E0128 00:23:51.101573 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.103018 kubelet[3705]: E0128 00:23:51.103004 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.103153 kubelet[3705]: W0128 00:23:51.103100 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.103153 kubelet[3705]: E0128 00:23:51.103132 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:23:51.103419 kubelet[3705]: E0128 00:23:51.103361 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.103419 kubelet[3705]: W0128 00:23:51.103383 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.103419 kubelet[3705]: E0128 00:23:51.103407 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.103666 kubelet[3705]: E0128 00:23:51.103656 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.103799 kubelet[3705]: W0128 00:23:51.103724 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.103799 kubelet[3705]: E0128 00:23:51.103754 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.105061 kubelet[3705]: E0128 00:23:51.105014 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.105061 kubelet[3705]: W0128 00:23:51.105026 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.105061 kubelet[3705]: E0128 00:23:51.105054 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.105838 kubelet[3705]: E0128 00:23:51.105794 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.106048 kubelet[3705]: W0128 00:23:51.105924 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.106236 kubelet[3705]: E0128 00:23:51.106218 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.106498 kubelet[3705]: E0128 00:23:51.106355 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.106498 kubelet[3705]: W0128 00:23:51.106366 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.106686 kubelet[3705]: E0128 00:23:51.106670 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:23:51.107471 kubelet[3705]: E0128 00:23:51.107189 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.107711 kubelet[3705]: W0128 00:23:51.107640 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.107975 kubelet[3705]: E0128 00:23:51.107869 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.108972 kubelet[3705]: E0128 00:23:51.108719 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.109305 kubelet[3705]: W0128 00:23:51.109076 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.109857 kubelet[3705]: E0128 00:23:51.109843 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.109998 kubelet[3705]: W0128 00:23:51.109921 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.111071 kubelet[3705]: E0128 00:23:51.110595 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.111071 kubelet[3705]: W0128 00:23:51.110694 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.111071 kubelet[3705]: E0128 00:23:51.110709 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.112921 kubelet[3705]: E0128 00:23:51.112886 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.112921 kubelet[3705]: W0128 00:23:51.112901 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.113120 kubelet[3705]: E0128 00:23:51.113008 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.113120 kubelet[3705]: E0128 00:23:51.113034 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.113120 kubelet[3705]: E0128 00:23:51.113042 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:23:51.113682 kubelet[3705]: E0128 00:23:51.113667 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.113969 kubelet[3705]: W0128 00:23:51.113957 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.114050 kubelet[3705]: E0128 00:23:51.114041 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.114258 containerd[2198]: time="2026-01-28T00:23:51.114232044Z" level=info msg="connecting to shim 25a8ed7ba5fb7f49eb4ae466a99bd7cb6781dac8a7c99fbf22ff5b50d27a5ff4" address="unix:///run/containerd/s/fe23a4f70d666a4ce463b69b07f261c5bc175748e81b49f7a5d5cbc7a4698e96" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:23:51.114482 kubelet[3705]: E0128 00:23:51.114464 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.114482 kubelet[3705]: W0128 00:23:51.114478 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.114560 kubelet[3705]: E0128 00:23:51.114494 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.114691 kubelet[3705]: E0128 00:23:51.114671 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:51.114691 kubelet[3705]: W0128 00:23:51.114680 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:51.114691 kubelet[3705]: E0128 00:23:51.114689 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:51.133829 containerd[2198]: time="2026-01-28T00:23:51.133787270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77c56fc454-mggnt,Uid:0beb59b1-56cc-43da-81e7-b979c912ebed,Namespace:calico-system,Attempt:0,} returns sandbox id \"036dd409ec7cce668eff59ac0afe56313093de070a25a4b0fe7341a4aac3063b\"" Jan 28 00:23:51.135914 containerd[2198]: time="2026-01-28T00:23:51.135708971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 28 00:23:51.144973 systemd[1]: Started cri-containerd-25a8ed7ba5fb7f49eb4ae466a99bd7cb6781dac8a7c99fbf22ff5b50d27a5ff4.scope - libcontainer container 25a8ed7ba5fb7f49eb4ae466a99bd7cb6781dac8a7c99fbf22ff5b50d27a5ff4. 
Jan 28 00:23:51.151000 audit: BPF prog-id=179 op=LOAD Jan 28 00:23:51.151000 audit: BPF prog-id=180 op=LOAD Jan 28 00:23:51.151000 audit[4206]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4186 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:51.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613865643762613566623766343965623461653436366139396264 Jan 28 00:23:51.151000 audit: BPF prog-id=180 op=UNLOAD Jan 28 00:23:51.151000 audit[4206]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:51.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613865643762613566623766343965623461653436366139396264 Jan 28 00:23:51.151000 audit: BPF prog-id=181 op=LOAD Jan 28 00:23:51.151000 audit[4206]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4186 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:51.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613865643762613566623766343965623461653436366139396264 Jan 28 00:23:51.151000 audit: BPF prog-id=182 op=LOAD Jan 28 00:23:51.151000 audit[4206]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4186 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:51.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613865643762613566623766343965623461653436366139396264 Jan 28 00:23:51.151000 audit: BPF prog-id=182 op=UNLOAD Jan 28 00:23:51.151000 audit[4206]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:51.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613865643762613566623766343965623461653436366139396264 Jan 28 00:23:51.151000 audit: BPF prog-id=181 op=UNLOAD Jan 28 00:23:51.151000 audit[4206]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:51.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613865643762613566623766343965623461653436366139396264 Jan 28 00:23:51.151000 audit: BPF prog-id=183 op=LOAD Jan 28 00:23:51.151000 audit[4206]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4186 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:51.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613865643762613566623766343965623461653436366139396264 Jan 28 00:23:51.165529 containerd[2198]: time="2026-01-28T00:23:51.165499270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gtwbg,Uid:6ac7c477-5f44-4669-a30c-04f898072d78,Namespace:calico-system,Attempt:0,} returns sandbox id \"25a8ed7ba5fb7f49eb4ae466a99bd7cb6781dac8a7c99fbf22ff5b50d27a5ff4\"" Jan 28 00:23:51.583000 audit[4234]: NETFILTER_CFG table=filter:120 family=2 entries=22 op=nft_register_rule pid=4234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:51.583000 audit[4234]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffff8fdd80 a2=0 a3=1 items=0 ppid=3806 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:51.583000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:51.587000 audit[4234]: NETFILTER_CFG table=nat:121 family=2 entries=12 op=nft_register_rule pid=4234 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:51.587000 audit[4234]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffff8fdd80 a2=0 a3=1 items=0 ppid=3806 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:51.587000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:52.361494 kubelet[3705]: E0128 00:23:52.361135 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2sfv" podUID="f6d08a70-95be-4168-8a2f-3e965a6278e2" Jan 28 00:23:52.522652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1256739671.mount: Deactivated successfully. 
Jan 28 00:23:53.391621 containerd[2198]: time="2026-01-28T00:23:53.391577265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:53.395251 containerd[2198]: time="2026-01-28T00:23:53.395215661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Jan 28 00:23:53.398413 containerd[2198]: time="2026-01-28T00:23:53.398385725Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:53.405089 containerd[2198]: time="2026-01-28T00:23:53.405066092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:53.405591 containerd[2198]: time="2026-01-28T00:23:53.405441671Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.269705819s" Jan 28 00:23:53.405591 containerd[2198]: time="2026-01-28T00:23:53.405539401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 28 00:23:53.407319 containerd[2198]: time="2026-01-28T00:23:53.407154598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 28 00:23:53.424318 containerd[2198]: time="2026-01-28T00:23:53.423958044Z" level=info msg="CreateContainer within sandbox \"036dd409ec7cce668eff59ac0afe56313093de070a25a4b0fe7341a4aac3063b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 28 00:23:53.452024 containerd[2198]: time="2026-01-28T00:23:53.451993759Z" level=info msg="Container 60c9698f1817f3950859c137faabf3de698c80bbb7350006e0c14cd6db263727: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:23:53.454481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount621933116.mount: Deactivated successfully. Jan 28 00:23:53.471458 containerd[2198]: time="2026-01-28T00:23:53.471426389Z" level=info msg="CreateContainer within sandbox \"036dd409ec7cce668eff59ac0afe56313093de070a25a4b0fe7341a4aac3063b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"60c9698f1817f3950859c137faabf3de698c80bbb7350006e0c14cd6db263727\"" Jan 28 00:23:53.472723 containerd[2198]: time="2026-01-28T00:23:53.471791407Z" level=info msg="StartContainer for \"60c9698f1817f3950859c137faabf3de698c80bbb7350006e0c14cd6db263727\"" Jan 28 00:23:53.472723 containerd[2198]: time="2026-01-28T00:23:53.472540932Z" level=info msg="connecting to shim 60c9698f1817f3950859c137faabf3de698c80bbb7350006e0c14cd6db263727" address="unix:///run/containerd/s/5b0137756b7ba28f7fa433ad7f74faf355eeda9e6a64ced8aa340fedf875894a" protocol=ttrpc version=3 Jan 28 00:23:53.490961 systemd[1]: Started cri-containerd-60c9698f1817f3950859c137faabf3de698c80bbb7350006e0c14cd6db263727.scope - libcontainer container 60c9698f1817f3950859c137faabf3de698c80bbb7350006e0c14cd6db263727. 
Jan 28 00:23:53.498000 audit: BPF prog-id=184 op=LOAD Jan 28 00:23:53.498000 audit: BPF prog-id=185 op=LOAD Jan 28 00:23:53.498000 audit[4245]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=4116 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:53.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633936393866313831376633393530383539633133376661616266 Jan 28 00:23:53.498000 audit: BPF prog-id=185 op=UNLOAD Jan 28 00:23:53.498000 audit[4245]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4116 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:53.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633936393866313831376633393530383539633133376661616266 Jan 28 00:23:53.498000 audit: BPF prog-id=186 op=LOAD Jan 28 00:23:53.498000 audit[4245]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4116 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:53.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633936393866313831376633393530383539633133376661616266 Jan 28 00:23:53.498000 audit: BPF prog-id=187 op=LOAD Jan 28 00:23:53.498000 audit[4245]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4116 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:53.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633936393866313831376633393530383539633133376661616266 Jan 28 00:23:53.498000 audit: BPF prog-id=187 op=UNLOAD Jan 28 00:23:53.498000 audit[4245]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4116 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:53.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633936393866313831376633393530383539633133376661616266 Jan 28 00:23:53.498000 audit: BPF prog-id=186 op=UNLOAD Jan 28 00:23:53.498000 audit[4245]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4116 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:53.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633936393866313831376633393530383539633133376661616266 Jan 28 00:23:53.498000 audit: BPF prog-id=188 op=LOAD Jan 28 00:23:53.498000 audit[4245]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4116 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:53.498000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630633936393866313831376633393530383539633133376661616266 Jan 28 00:23:53.523787 containerd[2198]: time="2026-01-28T00:23:53.523760740Z" level=info msg="StartContainer for \"60c9698f1817f3950859c137faabf3de698c80bbb7350006e0c14cd6db263727\" returns successfully" Jan 28 00:23:54.361527 kubelet[3705]: E0128 00:23:54.361484 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2sfv" podUID="f6d08a70-95be-4168-8a2f-3e965a6278e2" Jan 28 00:23:54.494215 kubelet[3705]: I0128 00:23:54.493862 3705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77c56fc454-mggnt" podStartSLOduration=2.222586337 podStartE2EDuration="4.493707083s" podCreationTimestamp="2026-01-28 00:23:50 +0000 UTC" firstStartedPulling="2026-01-28 00:23:51.135499893 +0000 UTC m=+20.830096689" lastFinishedPulling="2026-01-28 00:23:53.406620639 +0000 UTC m=+23.101217435" observedRunningTime="2026-01-28 00:23:54.493299232 +0000 UTC m=+24.187896028" watchObservedRunningTime="2026-01-28 00:23:54.493707083 +0000 UTC m=+24.188303879" Jan 28 00:23:54.515015 kubelet[3705]: E0128 00:23:54.514987 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.515015 kubelet[3705]: W0128 00:23:54.515009 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.515141 kubelet[3705]: E0128 00:23:54.515026 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:23:54.515259 kubelet[3705]: E0128 00:23:54.515240 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.515283 kubelet[3705]: W0128 00:23:54.515254 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.515298 kubelet[3705]: E0128 00:23:54.515286 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.515446 kubelet[3705]: E0128 00:23:54.515432 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.515446 kubelet[3705]: W0128 00:23:54.515442 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.515492 kubelet[3705]: E0128 00:23:54.515452 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.515592 kubelet[3705]: E0128 00:23:54.515579 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.515592 kubelet[3705]: W0128 00:23:54.515589 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.515623 kubelet[3705]: E0128 00:23:54.515596 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.515745 kubelet[3705]: E0128 00:23:54.515731 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.515745 kubelet[3705]: W0128 00:23:54.515745 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.515792 kubelet[3705]: E0128 00:23:54.515752 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.515883 kubelet[3705]: E0128 00:23:54.515871 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.515883 kubelet[3705]: W0128 00:23:54.515880 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.515927 kubelet[3705]: E0128 00:23:54.515886 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:23:54.516003 kubelet[3705]: E0128 00:23:54.515992 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.516003 kubelet[3705]: W0128 00:23:54.516001 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.516035 kubelet[3705]: E0128 00:23:54.516007 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.516139 kubelet[3705]: E0128 00:23:54.516124 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.516139 kubelet[3705]: W0128 00:23:54.516137 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.516186 kubelet[3705]: E0128 00:23:54.516143 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.516267 kubelet[3705]: E0128 00:23:54.516254 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.516267 kubelet[3705]: W0128 00:23:54.516263 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.516295 kubelet[3705]: E0128 00:23:54.516269 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.516405 kubelet[3705]: E0128 00:23:54.516393 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.516405 kubelet[3705]: W0128 00:23:54.516401 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.516446 kubelet[3705]: E0128 00:23:54.516407 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.516521 kubelet[3705]: E0128 00:23:54.516508 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.516521 kubelet[3705]: W0128 00:23:54.516517 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.516550 kubelet[3705]: E0128 00:23:54.516522 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:23:54.516626 kubelet[3705]: E0128 00:23:54.516615 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.516626 kubelet[3705]: W0128 00:23:54.516623 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.516652 kubelet[3705]: E0128 00:23:54.516630 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.516764 kubelet[3705]: E0128 00:23:54.516752 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.516764 kubelet[3705]: W0128 00:23:54.516760 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.516806 kubelet[3705]: E0128 00:23:54.516766 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.516884 kubelet[3705]: E0128 00:23:54.516871 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.516884 kubelet[3705]: W0128 00:23:54.516881 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.516917 kubelet[3705]: E0128 00:23:54.516886 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.516980 kubelet[3705]: E0128 00:23:54.516969 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.516980 kubelet[3705]: W0128 00:23:54.516977 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.517008 kubelet[3705]: E0128 00:23:54.516982 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.525343 kubelet[3705]: E0128 00:23:54.525323 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.525343 kubelet[3705]: W0128 00:23:54.525338 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.525420 kubelet[3705]: E0128 00:23:54.525349 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:23:54.525500 kubelet[3705]: E0128 00:23:54.525483 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.525500 kubelet[3705]: W0128 00:23:54.525495 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.525500 kubelet[3705]: E0128 00:23:54.525501 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.525748 kubelet[3705]: E0128 00:23:54.525733 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.525796 kubelet[3705]: W0128 00:23:54.525786 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.525867 kubelet[3705]: E0128 00:23:54.525855 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.526040 kubelet[3705]: E0128 00:23:54.526016 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.526040 kubelet[3705]: W0128 00:23:54.526027 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.526040 kubelet[3705]: E0128 00:23:54.526038 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.526179 kubelet[3705]: E0128 00:23:54.526167 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.526179 kubelet[3705]: W0128 00:23:54.526176 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.526226 kubelet[3705]: E0128 00:23:54.526185 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.526295 kubelet[3705]: E0128 00:23:54.526281 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.526295 kubelet[3705]: W0128 00:23:54.526290 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.526295 kubelet[3705]: E0128 00:23:54.526296 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:23:54.526402 kubelet[3705]: E0128 00:23:54.526389 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.526402 kubelet[3705]: W0128 00:23:54.526398 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.526402 kubelet[3705]: E0128 00:23:54.526407 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.526613 kubelet[3705]: E0128 00:23:54.526601 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.526658 kubelet[3705]: W0128 00:23:54.526648 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.526716 kubelet[3705]: E0128 00:23:54.526709 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.527038 kubelet[3705]: E0128 00:23:54.526943 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.527038 kubelet[3705]: W0128 00:23:54.526955 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.527038 kubelet[3705]: E0128 00:23:54.526970 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.527223 kubelet[3705]: E0128 00:23:54.527213 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.527348 kubelet[3705]: W0128 00:23:54.527269 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.527348 kubelet[3705]: E0128 00:23:54.527295 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.527449 kubelet[3705]: E0128 00:23:54.527439 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.527492 kubelet[3705]: W0128 00:23:54.527483 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.527543 kubelet[3705]: E0128 00:23:54.527530 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:23:54.527807 kubelet[3705]: E0128 00:23:54.527716 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.527807 kubelet[3705]: W0128 00:23:54.527728 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.527807 kubelet[3705]: E0128 00:23:54.527746 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.527998 kubelet[3705]: E0128 00:23:54.527988 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.528054 kubelet[3705]: W0128 00:23:54.528045 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.528105 kubelet[3705]: E0128 00:23:54.528097 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.528233 kubelet[3705]: E0128 00:23:54.528213 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.528233 kubelet[3705]: W0128 00:23:54.528226 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.528233 kubelet[3705]: E0128 00:23:54.528235 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.528362 kubelet[3705]: E0128 00:23:54.528347 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.528362 kubelet[3705]: W0128 00:23:54.528358 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.528404 kubelet[3705]: E0128 00:23:54.528364 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.528597 kubelet[3705]: E0128 00:23:54.528583 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.528597 kubelet[3705]: W0128 00:23:54.528594 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.528662 kubelet[3705]: E0128 00:23:54.528608 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:23:54.528736 kubelet[3705]: E0128 00:23:54.528724 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.528736 kubelet[3705]: W0128 00:23:54.528732 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.528783 kubelet[3705]: E0128 00:23:54.528746 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.528900 kubelet[3705]: E0128 00:23:54.528888 3705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:23:54.528900 kubelet[3705]: W0128 00:23:54.528897 3705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:23:54.528947 kubelet[3705]: E0128 00:23:54.528904 3705 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:23:54.781885 containerd[2198]: time="2026-01-28T00:23:54.781341657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:54.784473 containerd[2198]: time="2026-01-28T00:23:54.784436662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 28 00:23:54.787779 containerd[2198]: time="2026-01-28T00:23:54.787754689Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:54.791838 containerd[2198]: time="2026-01-28T00:23:54.791721574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:54.792279 containerd[2198]: time="2026-01-28T00:23:54.792178379Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.384999388s" Jan 28 00:23:54.792279 containerd[2198]: time="2026-01-28T00:23:54.792205203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 28 00:23:54.794681 containerd[2198]: time="2026-01-28T00:23:54.794656079Z" level=info msg="CreateContainer within sandbox \"25a8ed7ba5fb7f49eb4ae466a99bd7cb6781dac8a7c99fbf22ff5b50d27a5ff4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 28 00:23:54.828853 containerd[2198]: time="2026-01-28T00:23:54.828210545Z" level=info msg="Container 262bf0e9b30d90adcc36c81695eb9bae88d0736deb0f4156c43e3255d4b74a05: CDI 
devices from CRI Config.CDIDevices: []" Jan 28 00:23:54.868335 containerd[2198]: time="2026-01-28T00:23:54.868238166Z" level=info msg="CreateContainer within sandbox \"25a8ed7ba5fb7f49eb4ae466a99bd7cb6781dac8a7c99fbf22ff5b50d27a5ff4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"262bf0e9b30d90adcc36c81695eb9bae88d0736deb0f4156c43e3255d4b74a05\"" Jan 28 00:23:54.869903 containerd[2198]: time="2026-01-28T00:23:54.869333164Z" level=info msg="StartContainer for \"262bf0e9b30d90adcc36c81695eb9bae88d0736deb0f4156c43e3255d4b74a05\"" Jan 28 00:23:54.870581 containerd[2198]: time="2026-01-28T00:23:54.870546413Z" level=info msg="connecting to shim 262bf0e9b30d90adcc36c81695eb9bae88d0736deb0f4156c43e3255d4b74a05" address="unix:///run/containerd/s/fe23a4f70d666a4ce463b69b07f261c5bc175748e81b49f7a5d5cbc7a4698e96" protocol=ttrpc version=3 Jan 28 00:23:54.888019 systemd[1]: Started cri-containerd-262bf0e9b30d90adcc36c81695eb9bae88d0736deb0f4156c43e3255d4b74a05.scope - libcontainer container 262bf0e9b30d90adcc36c81695eb9bae88d0736deb0f4156c43e3255d4b74a05. Jan 28 00:23:54.936000 audit: BPF prog-id=189 op=LOAD Jan 28 00:23:54.936000 audit[4321]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=4186 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:54.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236326266306539623330643930616463633336633831363935656239 Jan 28 00:23:54.936000 audit: BPF prog-id=190 op=LOAD Jan 28 00:23:54.936000 audit[4321]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=4186 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:54.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236326266306539623330643930616463633336633831363935656239 Jan 28 00:23:54.936000 audit: BPF prog-id=190 op=UNLOAD Jan 28 00:23:54.936000 audit[4321]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:54.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236326266306539623330643930616463633336633831363935656239 Jan 28 00:23:54.936000 audit: BPF prog-id=189 op=UNLOAD Jan 28 00:23:54.936000 audit[4321]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:54.936000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236326266306539623330643930616463633336633831363935656239 Jan 28 00:23:54.937000 audit: BPF prog-id=191 op=LOAD Jan 28 00:23:54.937000 audit[4321]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=4186 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:54.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236326266306539623330643930616463633336633831363935656239 Jan 28 00:23:54.960769 containerd[2198]: time="2026-01-28T00:23:54.960720813Z" level=info msg="StartContainer for \"262bf0e9b30d90adcc36c81695eb9bae88d0736deb0f4156c43e3255d4b74a05\" returns successfully" Jan 28 00:23:54.966518 systemd[1]: cri-containerd-262bf0e9b30d90adcc36c81695eb9bae88d0736deb0f4156c43e3255d4b74a05.scope: Deactivated successfully. Jan 28 00:23:54.969000 audit: BPF prog-id=191 op=UNLOAD Jan 28 00:23:54.970480 containerd[2198]: time="2026-01-28T00:23:54.969657235Z" level=info msg="received container exit event container_id:\"262bf0e9b30d90adcc36c81695eb9bae88d0736deb0f4156c43e3255d4b74a05\" id:\"262bf0e9b30d90adcc36c81695eb9bae88d0736deb0f4156c43e3255d4b74a05\" pid:4334 exited_at:{seconds:1769559834 nanos:969313049}" Jan 28 00:23:54.987780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-262bf0e9b30d90adcc36c81695eb9bae88d0736deb0f4156c43e3255d4b74a05-rootfs.mount: Deactivated successfully. 
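The repeated driver-call failures earlier in this burst are the kubelet's FlexVolume probe: it scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, runs each driver binary with the single argument init, and parses the binary's stdout as JSON. The nodeagent~uds/uds binary is only put in place by the Calico pod2daemon-flexvol container whose pull and start are traced just above, so until that container has run the call produces no output and the probe logs "unexpected end of JSON input". A minimal sketch of the JSON handshake such a driver is expected to print (illustrative only, not Calico's actual driver):

```go
// Hypothetical minimal FlexVolume driver: the kubelet runs
// "<driver> init" and expects a JSON status object on stdout.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`                 // "Success", "Failure" or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // only meaningful for "init"
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// An empty stdout here is exactly what produces the
		// "unexpected end of JSON input" errors seen in the log.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
		os.Exit(1)
	}
}
```

Once a real driver binary is installed and answers init with a status object like this, the "Error dynamically probing plugins" spam stops.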
Jan 28 00:23:55.064000 audit[4372]: NETFILTER_CFG table=filter:122 family=2 entries=21 op=nft_register_rule pid=4372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:55.064000 audit[4372]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe52a6da0 a2=0 a3=1 items=0 ppid=3806 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:55.064000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:55.067000 audit[4372]: NETFILTER_CFG table=nat:123 family=2 entries=19 op=nft_register_chain pid=4372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:23:55.067000 audit[4372]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffe52a6da0 a2=0 a3=1 items=0 ppid=3806 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:55.067000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:23:56.362207 kubelet[3705]: E0128 00:23:56.362167 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2sfv" podUID="f6d08a70-95be-4168-8a2f-3e965a6278e2" Jan 28 00:23:56.489867 containerd[2198]: time="2026-01-28T00:23:56.489828463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 28 00:23:58.361235 kubelet[3705]: E0128 00:23:58.361191 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2sfv" podUID="f6d08a70-95be-4168-8a2f-3e965a6278e2" Jan 28 00:23:59.366332 containerd[2198]: time="2026-01-28T00:23:59.365882874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:59.369883 containerd[2198]: time="2026-01-28T00:23:59.369810056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921989" Jan 28 00:23:59.373179 containerd[2198]: time="2026-01-28T00:23:59.373159407Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:59.377919 containerd[2198]: time="2026-01-28T00:23:59.377889015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:23:59.378226 containerd[2198]: time="2026-01-28T00:23:59.378206358Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.888338678s" Jan 28 00:23:59.378316 containerd[2198]: time="2026-01-28T00:23:59.378302817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 28 00:23:59.380466 containerd[2198]: time="2026-01-28T00:23:59.380439363Z" level=info msg="CreateContainer within sandbox \"25a8ed7ba5fb7f49eb4ae466a99bd7cb6781dac8a7c99fbf22ff5b50d27a5ff4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 00:23:59.408840 containerd[2198]: time="2026-01-28T00:23:59.407854877Z" level=info msg="Container a174b53740481461878a70839778c6220302ebdba420e15e12232e9e04572d26: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:23:59.427619 containerd[2198]: time="2026-01-28T00:23:59.427583457Z" level=info msg="CreateContainer within sandbox \"25a8ed7ba5fb7f49eb4ae466a99bd7cb6781dac8a7c99fbf22ff5b50d27a5ff4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a174b53740481461878a70839778c6220302ebdba420e15e12232e9e04572d26\"" Jan 28 00:23:59.428780 containerd[2198]: time="2026-01-28T00:23:59.428747716Z" level=info msg="StartContainer for \"a174b53740481461878a70839778c6220302ebdba420e15e12232e9e04572d26\"" Jan 28 00:23:59.429785 containerd[2198]: time="2026-01-28T00:23:59.429757236Z" level=info msg="connecting to shim a174b53740481461878a70839778c6220302ebdba420e15e12232e9e04572d26" address="unix:///run/containerd/s/fe23a4f70d666a4ce463b69b07f261c5bc175748e81b49f7a5d5cbc7a4698e96" protocol=ttrpc version=3 Jan 28 00:23:59.451949 systemd[1]: Started cri-containerd-a174b53740481461878a70839778c6220302ebdba420e15e12232e9e04572d26.scope - libcontainer container a174b53740481461878a70839778c6220302ebdba420e15e12232e9e04572d26. 
Jan 28 00:23:59.495000 audit: BPF prog-id=192 op=LOAD Jan 28 00:23:59.499648 kernel: kauditd_printk_skb: 90 callbacks suppressed Jan 28 00:23:59.499742 kernel: audit: type=1334 audit(1769559839.495:577): prog-id=192 op=LOAD Jan 28 00:23:59.520567 kernel: audit: type=1300 audit(1769559839.495:577): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4186 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:59.495000 audit[4385]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4186 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:59.495000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131373462353337343034383134363138373861373038333937373863 Jan 28 00:23:59.537894 kernel: audit: type=1327 audit(1769559839.495:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131373462353337343034383134363138373861373038333937373863 Jan 28 00:23:59.503000 audit: BPF prog-id=193 op=LOAD Jan 28 00:23:59.543968 kernel: audit: type=1334 audit(1769559839.503:578): prog-id=193 op=LOAD Jan 28 00:23:59.503000 audit[4385]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4186 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:59.560138 kernel: audit: type=1300 audit(1769559839.503:578): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4186 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:59.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131373462353337343034383134363138373861373038333937373863 Jan 28 00:23:59.576976 kernel: audit: type=1327 audit(1769559839.503:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131373462353337343034383134363138373861373038333937373863 Jan 28 00:23:59.503000 audit: BPF prog-id=193 op=UNLOAD Jan 28 00:23:59.582657 kernel: audit: type=1334 audit(1769559839.503:579): prog-id=193 op=UNLOAD Jan 28 00:23:59.503000 audit[4385]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:59.598457 kernel: audit: type=1300 
audit(1769559839.503:579): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:59.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131373462353337343034383134363138373861373038333937373863 Jan 28 00:23:59.616160 kernel: audit: type=1327 audit(1769559839.503:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131373462353337343034383134363138373861373038333937373863 Jan 28 00:23:59.503000 audit: BPF prog-id=192 op=UNLOAD Jan 28 00:23:59.621409 kernel: audit: type=1334 audit(1769559839.503:580): prog-id=192 op=UNLOAD Jan 28 00:23:59.503000 audit[4385]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:59.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131373462353337343034383134363138373861373038333937373863 Jan 28 00:23:59.503000 audit: BPF prog-id=194 op=LOAD Jan 28 00:23:59.503000 audit[4385]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4186 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:23:59.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131373462353337343034383134363138373861373038333937373863 Jan 28 00:23:59.638638 containerd[2198]: time="2026-01-28T00:23:59.638532656Z" level=info msg="StartContainer for \"a174b53740481461878a70839778c6220302ebdba420e15e12232e9e04572d26\" returns successfully" Jan 28 00:24:00.362331 kubelet[3705]: E0128 00:24:00.362107 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2sfv" podUID="f6d08a70-95be-4168-8a2f-3e965a6278e2" Jan 28 00:24:00.759834 containerd[2198]: time="2026-01-28T00:24:00.759581449Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 00:24:00.761586 systemd[1]: cri-containerd-a174b53740481461878a70839778c6220302ebdba420e15e12232e9e04572d26.scope: Deactivated successfully. 
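The audit records emitted while runc sets up each container encode the process title as a hex string of NUL-separated argv entries (the PROCTITLE field), and the kernel notes that kauditd suppressed further callbacks during the burst. Decoding the prefix shows it is simply the runc command line; a small standard-library sketch:

```go
// Sketch: decode the hex-encoded, NUL-separated proctitle field from the
// audit records above back into a readable runc command line.
package main

import (
	"encoding/hex"
	"fmt"
	"log"
	"strings"
)

func main() {
	// Prefix of a PROCTITLE value from the log (the full value is
	// truncated there as well).
	proctitle := "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		log.Fatal(err)
	}
	// auditd separates argv entries with NUL bytes.
	args := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(args, " "))
	// Output: runc --root /run/containerd/runc/k8s.io --log
}
```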
Jan 28 00:24:00.764097 containerd[2198]: time="2026-01-28T00:24:00.762470717Z" level=info msg="received container exit event container_id:\"a174b53740481461878a70839778c6220302ebdba420e15e12232e9e04572d26\" id:\"a174b53740481461878a70839778c6220302ebdba420e15e12232e9e04572d26\" pid:4398 exited_at:{seconds:1769559840 nanos:762270264}" Jan 28 00:24:00.763834 systemd[1]: cri-containerd-a174b53740481461878a70839778c6220302ebdba420e15e12232e9e04572d26.scope: Consumed 319ms CPU time, 193.6M memory peak, 165.9M written to disk. Jan 28 00:24:00.765000 audit: BPF prog-id=194 op=UNLOAD Jan 28 00:24:00.781412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a174b53740481461878a70839778c6220302ebdba420e15e12232e9e04572d26-rootfs.mount: Deactivated successfully. Jan 28 00:24:00.855990 kubelet[3705]: I0128 00:24:00.855925 3705 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 28 00:24:01.185395 kubelet[3705]: W0128 00:24:00.908999 3705 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4547.1.0-n-77eb5aaac5" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4547.1.0-n-77eb5aaac5' and this object Jan 28 00:24:01.185395 kubelet[3705]: E0128 00:24:00.909132 3705 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4547.1.0-n-77eb5aaac5\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4547.1.0-n-77eb5aaac5' and this object" logger="UnhandledError" Jan 28 00:24:01.185395 kubelet[3705]: W0128 00:24:00.909185 3705 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4547.1.0-n-77eb5aaac5" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4547.1.0-n-77eb5aaac5' and this object Jan 28 00:24:01.185395 kubelet[3705]: E0128 00:24:00.909194 3705 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4547.1.0-n-77eb5aaac5\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4547.1.0-n-77eb5aaac5' and this object" logger="UnhandledError" Jan 28 00:24:00.901556 systemd[1]: Created slice kubepods-burstable-pod6961f070_d2ac_4acd_b900_0b7c4fdc8c18.slice - libcontainer container kubepods-burstable-pod6961f070_d2ac_4acd_b900_0b7c4fdc8c18.slice. 
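The "failed to reload cni configuration" error logged just before the install-cni container exited is containerd's CRI plugin reacting to a write under /etc/cni/net.d (here calico-kubeconfig) before any usable network configuration exists, which is also why the kubelet keeps reporting NetworkPluginNotReady for csi-node-driver-w2sfv. A simplified, standard-library-only sketch of that directory check, under the assumption that this is only an illustration (the real loader is libcni and validates far more than a JSON parse):

```go
// Sketch: look for a CNI network configuration in /etc/cni/net.d the way
// a simplified loader might, to show why "no network config found" is
// reported until the install-cni container has written a Calico conflist.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, ":", err)
		return
	}
	found := false
	for _, e := range entries {
		ext := filepath.Ext(e.Name())
		if ext != ".conf" && ext != ".conflist" && ext != ".json" {
			continue // e.g. calico-kubeconfig is not a network config
		}
		data, err := os.ReadFile(filepath.Join(dir, e.Name()))
		if err != nil {
			continue
		}
		var conf map[string]any
		if json.Unmarshal(data, &conf) == nil {
			fmt.Println("usable CNI config:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("no network config found in", dir)
	}
}
```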
Jan 28 00:24:01.185612 kubelet[3705]: W0128 00:24:00.909219 3705 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4547.1.0-n-77eb5aaac5" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4547.1.0-n-77eb5aaac5' and this object Jan 28 00:24:01.185612 kubelet[3705]: E0128 00:24:00.909226 3705 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4547.1.0-n-77eb5aaac5\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4547.1.0-n-77eb5aaac5' and this object" logger="UnhandledError" Jan 28 00:24:01.185612 kubelet[3705]: W0128 00:24:00.912474 3705 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ci-4547.1.0-n-77eb5aaac5" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4547.1.0-n-77eb5aaac5' and this object Jan 28 00:24:01.185612 kubelet[3705]: E0128 00:24:00.912652 3705 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ci-4547.1.0-n-77eb5aaac5\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4547.1.0-n-77eb5aaac5' and this object" logger="UnhandledError" Jan 28 00:24:01.185612 kubelet[3705]: W0128 00:24:00.912546 3705 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4547.1.0-n-77eb5aaac5" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4547.1.0-n-77eb5aaac5' and this object Jan 28 00:24:00.909260 systemd[1]: Created slice kubepods-besteffort-podace8222a_010a_4f04_ad26_8197c0467a4d.slice - libcontainer container kubepods-besteffort-podace8222a_010a_4f04_ad26_8197c0467a4d.slice. 
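The reflector warnings above ("no relationship found between node ... and this object") come from the node authorizer: a kubelet may only read Secrets and ConfigMaps that are referenced by pods already bound to its node, so these watches are rejected until the fresh pod-to-node bindings propagate to the authorizer. A hedged client-go sketch of the same kind of request and of how the forbidden answer is detected, assuming an in-cluster client purely for illustration (the kubelet itself uses reflectors/informers with its node credentials rather than one-off gets):

```go
// Sketch: fetch a secret with client-go and detect the "forbidden" answer
// the node authorizer gives for objects not yet tied to this node.
package main

import (
	"context"
	"fmt"
	"log"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumption: running in-cluster
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	_, err = clientset.CoreV1().Secrets("calico-system").
		Get(context.Background(), "whisker-backend-key-pair", metav1.GetOptions{})
	if apierrors.IsForbidden(err) {
		// This is the condition behind the reflector warnings in the log:
		// the authorizer has not yet seen a pod on this node that
		// references the secret.
		fmt.Println("forbidden:", err)
	} else if err != nil {
		log.Fatal(err)
	} else {
		fmt.Println("secret readable")
	}
}
```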
Jan 28 00:24:01.185726 kubelet[3705]: E0128 00:24:00.912690 3705 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4547.1.0-n-77eb5aaac5\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4547.1.0-n-77eb5aaac5' and this object" logger="UnhandledError" Jan 28 00:24:01.185726 kubelet[3705]: I0128 00:24:00.969586 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxx48\" (UniqueName: \"kubernetes.io/projected/ace8222a-010a-4f04-ad26-8197c0467a4d-kube-api-access-cxx48\") pod \"whisker-96ffb7cdb-26585\" (UID: \"ace8222a-010a-4f04-ad26-8197c0467a4d\") " pod="calico-system/whisker-96ffb7cdb-26585" Jan 28 00:24:01.185726 kubelet[3705]: I0128 00:24:00.969610 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d62fc2fd-8ccc-48de-b10b-98a0aa5672ea-calico-apiserver-certs\") pod \"calico-apiserver-6db66f5c9f-n9l6g\" (UID: \"d62fc2fd-8ccc-48de-b10b-98a0aa5672ea\") " pod="calico-apiserver/calico-apiserver-6db66f5c9f-n9l6g" Jan 28 00:24:01.185726 kubelet[3705]: I0128 00:24:00.969630 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ace8222a-010a-4f04-ad26-8197c0467a4d-whisker-backend-key-pair\") pod \"whisker-96ffb7cdb-26585\" (UID: \"ace8222a-010a-4f04-ad26-8197c0467a4d\") " pod="calico-system/whisker-96ffb7cdb-26585" Jan 28 00:24:01.185726 kubelet[3705]: I0128 00:24:00.969641 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcvr7\" (UniqueName: \"kubernetes.io/projected/d774fe09-bd7c-498b-91a3-e6c2f720c9c3-kube-api-access-rcvr7\") pod \"calico-apiserver-6db66f5c9f-tsvdr\" (UID: \"d774fe09-bd7c-498b-91a3-e6c2f720c9c3\") " pod="calico-apiserver/calico-apiserver-6db66f5c9f-tsvdr" Jan 28 00:24:00.916987 systemd[1]: Created slice kubepods-besteffort-podd62fc2fd_8ccc_48de_b10b_98a0aa5672ea.slice - libcontainer container kubepods-besteffort-podd62fc2fd_8ccc_48de_b10b_98a0aa5672ea.slice. 
Jan 28 00:24:01.185856 kubelet[3705]: I0128 00:24:00.969654 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5v98\" (UniqueName: \"kubernetes.io/projected/d62fc2fd-8ccc-48de-b10b-98a0aa5672ea-kube-api-access-p5v98\") pod \"calico-apiserver-6db66f5c9f-n9l6g\" (UID: \"d62fc2fd-8ccc-48de-b10b-98a0aa5672ea\") " pod="calico-apiserver/calico-apiserver-6db66f5c9f-n9l6g" Jan 28 00:24:01.185856 kubelet[3705]: I0128 00:24:00.969680 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/37d401e3-39ef-4596-8144-de1aba842d50-goldmane-key-pair\") pod \"goldmane-666569f655-gxssn\" (UID: \"37d401e3-39ef-4596-8144-de1aba842d50\") " pod="calico-system/goldmane-666569f655-gxssn" Jan 28 00:24:01.185856 kubelet[3705]: I0128 00:24:00.969708 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d774fe09-bd7c-498b-91a3-e6c2f720c9c3-calico-apiserver-certs\") pod \"calico-apiserver-6db66f5c9f-tsvdr\" (UID: \"d774fe09-bd7c-498b-91a3-e6c2f720c9c3\") " pod="calico-apiserver/calico-apiserver-6db66f5c9f-tsvdr" Jan 28 00:24:01.185856 kubelet[3705]: I0128 00:24:00.969735 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z95d\" (UniqueName: \"kubernetes.io/projected/f8bf6ab5-ad5c-41b7-962e-92c73fabe079-kube-api-access-8z95d\") pod \"calico-apiserver-65cb8f7567-dv7tt\" (UID: \"f8bf6ab5-ad5c-41b7-962e-92c73fabe079\") " pod="calico-apiserver/calico-apiserver-65cb8f7567-dv7tt" Jan 28 00:24:01.185856 kubelet[3705]: I0128 00:24:00.969754 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lkvb\" (UniqueName: \"kubernetes.io/projected/11c0eb0b-ad29-4c1b-b01f-f65a107c6011-kube-api-access-6lkvb\") pod \"calico-kube-controllers-758f45684b-zdqfk\" (UID: \"11c0eb0b-ad29-4c1b-b01f-f65a107c6011\") " pod="calico-system/calico-kube-controllers-758f45684b-zdqfk" Jan 28 00:24:00.924948 systemd[1]: Created slice kubepods-burstable-pod15a1f57d_6d54_4a40_8a77_34f9abd91cfa.slice - libcontainer container kubepods-burstable-pod15a1f57d_6d54_4a40_8a77_34f9abd91cfa.slice. 
Jan 28 00:24:01.185967 kubelet[3705]: I0128 00:24:00.969768 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgf8q\" (UniqueName: \"kubernetes.io/projected/6961f070-d2ac-4acd-b900-0b7c4fdc8c18-kube-api-access-fgf8q\") pod \"coredns-668d6bf9bc-q9kdk\" (UID: \"6961f070-d2ac-4acd-b900-0b7c4fdc8c18\") " pod="kube-system/coredns-668d6bf9bc-q9kdk" Jan 28 00:24:01.185967 kubelet[3705]: I0128 00:24:00.969778 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15a1f57d-6d54-4a40-8a77-34f9abd91cfa-config-volume\") pod \"coredns-668d6bf9bc-gj4l2\" (UID: \"15a1f57d-6d54-4a40-8a77-34f9abd91cfa\") " pod="kube-system/coredns-668d6bf9bc-gj4l2" Jan 28 00:24:01.185967 kubelet[3705]: I0128 00:24:00.969799 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37d401e3-39ef-4596-8144-de1aba842d50-goldmane-ca-bundle\") pod \"goldmane-666569f655-gxssn\" (UID: \"37d401e3-39ef-4596-8144-de1aba842d50\") " pod="calico-system/goldmane-666569f655-gxssn" Jan 28 00:24:01.185967 kubelet[3705]: I0128 00:24:00.969836 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6961f070-d2ac-4acd-b900-0b7c4fdc8c18-config-volume\") pod \"coredns-668d6bf9bc-q9kdk\" (UID: \"6961f070-d2ac-4acd-b900-0b7c4fdc8c18\") " pod="kube-system/coredns-668d6bf9bc-q9kdk" Jan 28 00:24:01.185967 kubelet[3705]: I0128 00:24:00.969864 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ace8222a-010a-4f04-ad26-8197c0467a4d-whisker-ca-bundle\") pod \"whisker-96ffb7cdb-26585\" (UID: \"ace8222a-010a-4f04-ad26-8197c0467a4d\") " pod="calico-system/whisker-96ffb7cdb-26585" Jan 28 00:24:00.929627 systemd[1]: Created slice kubepods-besteffort-pod11c0eb0b_ad29_4c1b_b01f_f65a107c6011.slice - libcontainer container kubepods-besteffort-pod11c0eb0b_ad29_4c1b_b01f_f65a107c6011.slice. 
Jan 28 00:24:01.186070 kubelet[3705]: I0128 00:24:00.969879 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11c0eb0b-ad29-4c1b-b01f-f65a107c6011-tigera-ca-bundle\") pod \"calico-kube-controllers-758f45684b-zdqfk\" (UID: \"11c0eb0b-ad29-4c1b-b01f-f65a107c6011\") " pod="calico-system/calico-kube-controllers-758f45684b-zdqfk" Jan 28 00:24:01.186070 kubelet[3705]: I0128 00:24:00.969889 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhxzf\" (UniqueName: \"kubernetes.io/projected/37d401e3-39ef-4596-8144-de1aba842d50-kube-api-access-bhxzf\") pod \"goldmane-666569f655-gxssn\" (UID: \"37d401e3-39ef-4596-8144-de1aba842d50\") " pod="calico-system/goldmane-666569f655-gxssn" Jan 28 00:24:01.186070 kubelet[3705]: I0128 00:24:00.969918 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f8bf6ab5-ad5c-41b7-962e-92c73fabe079-calico-apiserver-certs\") pod \"calico-apiserver-65cb8f7567-dv7tt\" (UID: \"f8bf6ab5-ad5c-41b7-962e-92c73fabe079\") " pod="calico-apiserver/calico-apiserver-65cb8f7567-dv7tt" Jan 28 00:24:01.186070 kubelet[3705]: I0128 00:24:00.969931 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37d401e3-39ef-4596-8144-de1aba842d50-config\") pod \"goldmane-666569f655-gxssn\" (UID: \"37d401e3-39ef-4596-8144-de1aba842d50\") " pod="calico-system/goldmane-666569f655-gxssn" Jan 28 00:24:01.186070 kubelet[3705]: I0128 00:24:00.969945 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8pws\" (UniqueName: \"kubernetes.io/projected/15a1f57d-6d54-4a40-8a77-34f9abd91cfa-kube-api-access-j8pws\") pod \"coredns-668d6bf9bc-gj4l2\" (UID: \"15a1f57d-6d54-4a40-8a77-34f9abd91cfa\") " pod="kube-system/coredns-668d6bf9bc-gj4l2" Jan 28 00:24:00.933705 systemd[1]: Created slice kubepods-besteffort-pod37d401e3_39ef_4596_8144_de1aba842d50.slice - libcontainer container kubepods-besteffort-pod37d401e3_39ef_4596_8144_de1aba842d50.slice. Jan 28 00:24:00.941384 systemd[1]: Created slice kubepods-besteffort-podd774fe09_bd7c_498b_91a3_e6c2f720c9c3.slice - libcontainer container kubepods-besteffort-podd774fe09_bd7c_498b_91a3_e6c2f720c9c3.slice. Jan 28 00:24:00.947380 systemd[1]: Created slice kubepods-besteffort-podf8bf6ab5_ad5c_41b7_962e_92c73fabe079.slice - libcontainer container kubepods-besteffort-podf8bf6ab5_ad5c_41b7_962e_92c73fabe079.slice. 
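The "operationExecutor.VerifyControllerAttachedVolume started" lines above enumerate the projected, secret and configmap volumes of the pods just placed on this node (whisker, goldmane, the calico-apiserver pair, coredns, calico-kube-controllers). A short sketch of how two of those volumes would be expressed with the Kubernetes Go types, which is what obliges the kubelet to populate its secret/configmap caches before it can mount them; the volume and object names are taken from the log, everything else about the pod is omitted or assumed:

```go
// Sketch: secret- and configmap-backed pod volumes corresponding to the
// reconciler entries above, written with the k8s.io/api core/v1 types.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		{
			Name: "whisker-backend-key-pair",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "whisker-backend-key-pair"},
			},
		},
		{
			Name: "whisker-ca-bundle",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "whisker-ca-bundle"},
				},
			},
		},
	}
	for _, v := range volumes {
		fmt.Println("pod volume:", v.Name)
	}
}
```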
Jan 28 00:24:01.489505 containerd[2198]: time="2026-01-28T00:24:01.489371540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q9kdk,Uid:6961f070-d2ac-4acd-b900-0b7c4fdc8c18,Namespace:kube-system,Attempt:0,}" Jan 28 00:24:01.498196 containerd[2198]: time="2026-01-28T00:24:01.498163292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-758f45684b-zdqfk,Uid:11c0eb0b-ad29-4c1b-b01f-f65a107c6011,Namespace:calico-system,Attempt:0,}" Jan 28 00:24:01.498266 containerd[2198]: time="2026-01-28T00:24:01.498231525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gj4l2,Uid:15a1f57d-6d54-4a40-8a77-34f9abd91cfa,Namespace:kube-system,Attempt:0,}" Jan 28 00:24:01.696333 containerd[2198]: time="2026-01-28T00:24:01.696279432Z" level=error msg="Failed to destroy network for sandbox \"5375ccfdba2f53ccc097d1860a00d41d5be4d0b78431a6f82f122454283592ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:01.697255 containerd[2198]: time="2026-01-28T00:24:01.697213545Z" level=error msg="Failed to destroy network for sandbox \"854782ca1f46b8d54a02a136eccb0c6285d6f940e6e9b15f0f2f13fb3c77b274\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:01.706384 containerd[2198]: time="2026-01-28T00:24:01.705708352Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q9kdk,Uid:6961f070-d2ac-4acd-b900-0b7c4fdc8c18,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5375ccfdba2f53ccc097d1860a00d41d5be4d0b78431a6f82f122454283592ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:01.706536 kubelet[3705]: E0128 00:24:01.705990 3705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5375ccfdba2f53ccc097d1860a00d41d5be4d0b78431a6f82f122454283592ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:01.706536 kubelet[3705]: E0128 00:24:01.706055 3705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5375ccfdba2f53ccc097d1860a00d41d5be4d0b78431a6f82f122454283592ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q9kdk" Jan 28 00:24:01.706536 kubelet[3705]: E0128 00:24:01.706071 3705 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5375ccfdba2f53ccc097d1860a00d41d5be4d0b78431a6f82f122454283592ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q9kdk" Jan 28 00:24:01.707219 
kubelet[3705]: E0128 00:24:01.706102 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-q9kdk_kube-system(6961f070-d2ac-4acd-b900-0b7c4fdc8c18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-q9kdk_kube-system(6961f070-d2ac-4acd-b900-0b7c4fdc8c18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5375ccfdba2f53ccc097d1860a00d41d5be4d0b78431a6f82f122454283592ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-q9kdk" podUID="6961f070-d2ac-4acd-b900-0b7c4fdc8c18" Jan 28 00:24:01.712389 containerd[2198]: time="2026-01-28T00:24:01.712354909Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-758f45684b-zdqfk,Uid:11c0eb0b-ad29-4c1b-b01f-f65a107c6011,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"854782ca1f46b8d54a02a136eccb0c6285d6f940e6e9b15f0f2f13fb3c77b274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:01.713091 kubelet[3705]: E0128 00:24:01.712501 3705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"854782ca1f46b8d54a02a136eccb0c6285d6f940e6e9b15f0f2f13fb3c77b274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:01.713091 kubelet[3705]: E0128 00:24:01.712550 3705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"854782ca1f46b8d54a02a136eccb0c6285d6f940e6e9b15f0f2f13fb3c77b274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-758f45684b-zdqfk" Jan 28 00:24:01.713091 kubelet[3705]: E0128 00:24:01.712565 3705 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"854782ca1f46b8d54a02a136eccb0c6285d6f940e6e9b15f0f2f13fb3c77b274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-758f45684b-zdqfk" Jan 28 00:24:01.713290 kubelet[3705]: E0128 00:24:01.712710 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-758f45684b-zdqfk_calico-system(11c0eb0b-ad29-4c1b-b01f-f65a107c6011)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-758f45684b-zdqfk_calico-system(11c0eb0b-ad29-4c1b-b01f-f65a107c6011)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"854782ca1f46b8d54a02a136eccb0c6285d6f940e6e9b15f0f2f13fb3c77b274\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-758f45684b-zdqfk" podUID="11c0eb0b-ad29-4c1b-b01f-f65a107c6011" Jan 28 00:24:01.717377 containerd[2198]: time="2026-01-28T00:24:01.717352853Z" level=error msg="Failed to destroy network for sandbox \"a5f08d5fca7a8e541b00dcc040acd7d06a08176deeb5ce73db2a15ce9514c876\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:01.724108 containerd[2198]: time="2026-01-28T00:24:01.724037659Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gj4l2,Uid:15a1f57d-6d54-4a40-8a77-34f9abd91cfa,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f08d5fca7a8e541b00dcc040acd7d06a08176deeb5ce73db2a15ce9514c876\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:01.724278 kubelet[3705]: E0128 00:24:01.724258 3705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f08d5fca7a8e541b00dcc040acd7d06a08176deeb5ce73db2a15ce9514c876\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:01.724377 kubelet[3705]: E0128 00:24:01.724359 3705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f08d5fca7a8e541b00dcc040acd7d06a08176deeb5ce73db2a15ce9514c876\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gj4l2" Jan 28 00:24:01.724456 kubelet[3705]: E0128 00:24:01.724434 3705 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f08d5fca7a8e541b00dcc040acd7d06a08176deeb5ce73db2a15ce9514c876\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gj4l2" Jan 28 00:24:01.724530 kubelet[3705]: E0128 00:24:01.724512 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-gj4l2_kube-system(15a1f57d-6d54-4a40-8a77-34f9abd91cfa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-gj4l2_kube-system(15a1f57d-6d54-4a40-8a77-34f9abd91cfa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5f08d5fca7a8e541b00dcc040acd7d06a08176deeb5ce73db2a15ce9514c876\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-gj4l2" podUID="15a1f57d-6d54-4a40-8a77-34f9abd91cfa" Jan 28 00:24:01.780611 systemd[1]: run-netns-cni\x2d5b3cda7f\x2de86a\x2d831e\x2ddeb8\x2dba9624c0b45e.mount: Deactivated successfully. 
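Every RunPodSandbox failure above shares one root cause: the Calico CNI plugin reads this node's name from /var/lib/calico/nodename, a file written by the calico-node container once it is running, and until that file exists both the add and the delete operations fail with the stat error quoted in the log. A standard-library sketch of that lookup, simplified for illustration (the real plugin also has environment and configuration fallbacks):

```go
// Sketch: the nodename lookup the Calico CNI plugin performs; the log's
// "stat /var/lib/calico/nodename: no such file or directory" failures
// correspond to the error branch here.
package main

import (
	"fmt"
	"os"
	"strings"
)

func nodename() (string, error) {
	data, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		return "", fmt.Errorf("check that the calico/node container is running and has mounted /var/lib/calico/: %w", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Println("sandbox setup would fail:", err)
		return
	}
	fmt.Println("node name:", name)
}
```

The failures are therefore transient: once calico-node has started and written the file, sandbox creation for the pending coredns, calico-apiserver, whisker and goldmane pods can proceed.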
Jan 28 00:24:02.071200 kubelet[3705]: E0128 00:24:02.071144 3705 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Jan 28 00:24:02.071698 kubelet[3705]: E0128 00:24:02.071377 3705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37d401e3-39ef-4596-8144-de1aba842d50-goldmane-key-pair podName:37d401e3-39ef-4596-8144-de1aba842d50 nodeName:}" failed. No retries permitted until 2026-01-28 00:24:02.571354905 +0000 UTC m=+32.265951701 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/37d401e3-39ef-4596-8144-de1aba842d50-goldmane-key-pair") pod "goldmane-666569f655-gxssn" (UID: "37d401e3-39ef-4596-8144-de1aba842d50") : failed to sync secret cache: timed out waiting for the condition Jan 28 00:24:02.072374 kubelet[3705]: E0128 00:24:02.072296 3705 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Jan 28 00:24:02.072374 kubelet[3705]: E0128 00:24:02.072349 3705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ace8222a-010a-4f04-ad26-8197c0467a4d-whisker-backend-key-pair podName:ace8222a-010a-4f04-ad26-8197c0467a4d nodeName:}" failed. No retries permitted until 2026-01-28 00:24:02.572340059 +0000 UTC m=+32.266936855 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/ace8222a-010a-4f04-ad26-8197c0467a4d-whisker-backend-key-pair") pod "whisker-96ffb7cdb-26585" (UID: "ace8222a-010a-4f04-ad26-8197c0467a4d") : failed to sync secret cache: timed out waiting for the condition Jan 28 00:24:02.073504 kubelet[3705]: E0128 00:24:02.073460 3705 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 28 00:24:02.073601 kubelet[3705]: E0128 00:24:02.073579 3705 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ace8222a-010a-4f04-ad26-8197c0467a4d-whisker-ca-bundle podName:ace8222a-010a-4f04-ad26-8197c0467a4d nodeName:}" failed. No retries permitted until 2026-01-28 00:24:02.573493411 +0000 UTC m=+32.268090215 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/ace8222a-010a-4f04-ad26-8197c0467a4d-whisker-ca-bundle") pod "whisker-96ffb7cdb-26585" (UID: "ace8222a-010a-4f04-ad26-8197c0467a4d") : failed to sync configmap cache: timed out waiting for the condition Jan 28 00:24:02.098827 containerd[2198]: time="2026-01-28T00:24:02.098781210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db66f5c9f-tsvdr,Uid:d774fe09-bd7c-498b-91a3-e6c2f720c9c3,Namespace:calico-apiserver,Attempt:0,}" Jan 28 00:24:02.099146 containerd[2198]: time="2026-01-28T00:24:02.098781194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db66f5c9f-n9l6g,Uid:d62fc2fd-8ccc-48de-b10b-98a0aa5672ea,Namespace:calico-apiserver,Attempt:0,}" Jan 28 00:24:02.102293 containerd[2198]: time="2026-01-28T00:24:02.102266617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cb8f7567-dv7tt,Uid:f8bf6ab5-ad5c-41b7-962e-92c73fabe079,Namespace:calico-apiserver,Attempt:0,}" Jan 28 00:24:02.154633 containerd[2198]: time="2026-01-28T00:24:02.154541351Z" level=error msg="Failed to destroy network for sandbox \"3597b898e77ef21484c58dcb69c9b93e8a971ff7570955e72c486a94e94962d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:02.162872 containerd[2198]: time="2026-01-28T00:24:02.162300818Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db66f5c9f-tsvdr,Uid:d774fe09-bd7c-498b-91a3-e6c2f720c9c3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3597b898e77ef21484c58dcb69c9b93e8a971ff7570955e72c486a94e94962d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:02.164096 kubelet[3705]: E0128 00:24:02.163571 3705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3597b898e77ef21484c58dcb69c9b93e8a971ff7570955e72c486a94e94962d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:02.164096 kubelet[3705]: E0128 00:24:02.163615 3705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3597b898e77ef21484c58dcb69c9b93e8a971ff7570955e72c486a94e94962d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6db66f5c9f-tsvdr" Jan 28 00:24:02.164096 kubelet[3705]: E0128 00:24:02.163630 3705 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3597b898e77ef21484c58dcb69c9b93e8a971ff7570955e72c486a94e94962d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6db66f5c9f-tsvdr" Jan 28 00:24:02.164215 kubelet[3705]: E0128 
00:24:02.163663 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6db66f5c9f-tsvdr_calico-apiserver(d774fe09-bd7c-498b-91a3-e6c2f720c9c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6db66f5c9f-tsvdr_calico-apiserver(d774fe09-bd7c-498b-91a3-e6c2f720c9c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3597b898e77ef21484c58dcb69c9b93e8a971ff7570955e72c486a94e94962d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-tsvdr" podUID="d774fe09-bd7c-498b-91a3-e6c2f720c9c3" Jan 28 00:24:02.176828 containerd[2198]: time="2026-01-28T00:24:02.176776068Z" level=error msg="Failed to destroy network for sandbox \"e333f92c48a5cbd550ed3e4319adc678eea86a408f25589a03ea1415d47cd6e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:02.184386 containerd[2198]: time="2026-01-28T00:24:02.184351586Z" level=error msg="Failed to destroy network for sandbox \"e7d31b348370f9defdb633461fc9a64a42123f1b3ce3541875001ae93ffd2b2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:02.185446 containerd[2198]: time="2026-01-28T00:24:02.185419311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cb8f7567-dv7tt,Uid:f8bf6ab5-ad5c-41b7-962e-92c73fabe079,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e333f92c48a5cbd550ed3e4319adc678eea86a408f25589a03ea1415d47cd6e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:02.185655 kubelet[3705]: E0128 00:24:02.185631 3705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e333f92c48a5cbd550ed3e4319adc678eea86a408f25589a03ea1415d47cd6e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:02.185766 kubelet[3705]: E0128 00:24:02.185747 3705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e333f92c48a5cbd550ed3e4319adc678eea86a408f25589a03ea1415d47cd6e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65cb8f7567-dv7tt" Jan 28 00:24:02.185852 kubelet[3705]: E0128 00:24:02.185836 3705 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e333f92c48a5cbd550ed3e4319adc678eea86a408f25589a03ea1415d47cd6e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-65cb8f7567-dv7tt" Jan 28 00:24:02.185959 kubelet[3705]: E0128 00:24:02.185941 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65cb8f7567-dv7tt_calico-apiserver(f8bf6ab5-ad5c-41b7-962e-92c73fabe079)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65cb8f7567-dv7tt_calico-apiserver(f8bf6ab5-ad5c-41b7-962e-92c73fabe079)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e333f92c48a5cbd550ed3e4319adc678eea86a408f25589a03ea1415d47cd6e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65cb8f7567-dv7tt" podUID="f8bf6ab5-ad5c-41b7-962e-92c73fabe079" Jan 28 00:24:02.191861 containerd[2198]: time="2026-01-28T00:24:02.191807765Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db66f5c9f-n9l6g,Uid:d62fc2fd-8ccc-48de-b10b-98a0aa5672ea,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7d31b348370f9defdb633461fc9a64a42123f1b3ce3541875001ae93ffd2b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:02.192042 kubelet[3705]: E0128 00:24:02.192022 3705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7d31b348370f9defdb633461fc9a64a42123f1b3ce3541875001ae93ffd2b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:02.192123 kubelet[3705]: E0128 00:24:02.192111 3705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7d31b348370f9defdb633461fc9a64a42123f1b3ce3541875001ae93ffd2b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6db66f5c9f-n9l6g" Jan 28 00:24:02.192329 kubelet[3705]: E0128 00:24:02.192160 3705 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7d31b348370f9defdb633461fc9a64a42123f1b3ce3541875001ae93ffd2b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6db66f5c9f-n9l6g" Jan 28 00:24:02.192329 kubelet[3705]: E0128 00:24:02.192191 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6db66f5c9f-n9l6g_calico-apiserver(d62fc2fd-8ccc-48de-b10b-98a0aa5672ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6db66f5c9f-n9l6g_calico-apiserver(d62fc2fd-8ccc-48de-b10b-98a0aa5672ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7d31b348370f9defdb633461fc9a64a42123f1b3ce3541875001ae93ffd2b2e\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-n9l6g" podUID="d62fc2fd-8ccc-48de-b10b-98a0aa5672ea" Jan 28 00:24:02.366174 systemd[1]: Created slice kubepods-besteffort-podf6d08a70_95be_4168_8a2f_3e965a6278e2.slice - libcontainer container kubepods-besteffort-podf6d08a70_95be_4168_8a2f_3e965a6278e2.slice. Jan 28 00:24:02.368997 containerd[2198]: time="2026-01-28T00:24:02.368963663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2sfv,Uid:f6d08a70-95be-4168-8a2f-3e965a6278e2,Namespace:calico-system,Attempt:0,}" Jan 28 00:24:02.408419 containerd[2198]: time="2026-01-28T00:24:02.408381879Z" level=error msg="Failed to destroy network for sandbox \"b1406b406ed8b5c33697e1b83995907a167fd940318cb04c06f5e993c5f925f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:02.415242 containerd[2198]: time="2026-01-28T00:24:02.415207929Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2sfv,Uid:f6d08a70-95be-4168-8a2f-3e965a6278e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1406b406ed8b5c33697e1b83995907a167fd940318cb04c06f5e993c5f925f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:02.415738 kubelet[3705]: E0128 00:24:02.415356 3705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1406b406ed8b5c33697e1b83995907a167fd940318cb04c06f5e993c5f925f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:02.415738 kubelet[3705]: E0128 00:24:02.415394 3705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1406b406ed8b5c33697e1b83995907a167fd940318cb04c06f5e993c5f925f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2sfv" Jan 28 00:24:02.415738 kubelet[3705]: E0128 00:24:02.415410 3705 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1406b406ed8b5c33697e1b83995907a167fd940318cb04c06f5e993c5f925f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2sfv" Jan 28 00:24:02.415861 kubelet[3705]: E0128 00:24:02.415437 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w2sfv_calico-system(f6d08a70-95be-4168-8a2f-3e965a6278e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w2sfv_calico-system(f6d08a70-95be-4168-8a2f-3e965a6278e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"b1406b406ed8b5c33697e1b83995907a167fd940318cb04c06f5e993c5f925f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w2sfv" podUID="f6d08a70-95be-4168-8a2f-3e965a6278e2" Jan 28 00:24:02.506666 containerd[2198]: time="2026-01-28T00:24:02.506620583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 28 00:24:02.698624 containerd[2198]: time="2026-01-28T00:24:02.698375342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-96ffb7cdb-26585,Uid:ace8222a-010a-4f04-ad26-8197c0467a4d,Namespace:calico-system,Attempt:0,}" Jan 28 00:24:02.698793 containerd[2198]: time="2026-01-28T00:24:02.698773561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gxssn,Uid:37d401e3-39ef-4596-8144-de1aba842d50,Namespace:calico-system,Attempt:0,}" Jan 28 00:24:02.751135 containerd[2198]: time="2026-01-28T00:24:02.751094968Z" level=error msg="Failed to destroy network for sandbox \"859c91d3b7ba56c0b1febd939f44e3b79f923897f68a759cb8561d4592e31c26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:02.752477 containerd[2198]: time="2026-01-28T00:24:02.752454956Z" level=error msg="Failed to destroy network for sandbox \"e065862920260cd1a504885d4a877d21e9b84b6f645cb8627e81953070dda058\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:02.758074 containerd[2198]: time="2026-01-28T00:24:02.758047477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-96ffb7cdb-26585,Uid:ace8222a-010a-4f04-ad26-8197c0467a4d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"859c91d3b7ba56c0b1febd939f44e3b79f923897f68a759cb8561d4592e31c26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:02.758416 kubelet[3705]: E0128 00:24:02.758373 3705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"859c91d3b7ba56c0b1febd939f44e3b79f923897f68a759cb8561d4592e31c26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:02.758647 kubelet[3705]: E0128 00:24:02.758431 3705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"859c91d3b7ba56c0b1febd939f44e3b79f923897f68a759cb8561d4592e31c26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-96ffb7cdb-26585" Jan 28 00:24:02.758647 kubelet[3705]: E0128 00:24:02.758448 3705 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"859c91d3b7ba56c0b1febd939f44e3b79f923897f68a759cb8561d4592e31c26\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-96ffb7cdb-26585" Jan 28 00:24:02.758647 kubelet[3705]: E0128 00:24:02.758500 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-96ffb7cdb-26585_calico-system(ace8222a-010a-4f04-ad26-8197c0467a4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-96ffb7cdb-26585_calico-system(ace8222a-010a-4f04-ad26-8197c0467a4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"859c91d3b7ba56c0b1febd939f44e3b79f923897f68a759cb8561d4592e31c26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-96ffb7cdb-26585" podUID="ace8222a-010a-4f04-ad26-8197c0467a4d" Jan 28 00:24:02.764027 containerd[2198]: time="2026-01-28T00:24:02.763958973Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gxssn,Uid:37d401e3-39ef-4596-8144-de1aba842d50,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e065862920260cd1a504885d4a877d21e9b84b6f645cb8627e81953070dda058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:02.764254 kubelet[3705]: E0128 00:24:02.764221 3705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e065862920260cd1a504885d4a877d21e9b84b6f645cb8627e81953070dda058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:02.764304 kubelet[3705]: E0128 00:24:02.764263 3705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e065862920260cd1a504885d4a877d21e9b84b6f645cb8627e81953070dda058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-gxssn" Jan 28 00:24:02.764304 kubelet[3705]: E0128 00:24:02.764278 3705 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e065862920260cd1a504885d4a877d21e9b84b6f645cb8627e81953070dda058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-gxssn" Jan 28 00:24:02.764346 kubelet[3705]: E0128 00:24:02.764312 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-gxssn_calico-system(37d401e3-39ef-4596-8144-de1aba842d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-gxssn_calico-system(37d401e3-39ef-4596-8144-de1aba842d50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e065862920260cd1a504885d4a877d21e9b84b6f645cb8627e81953070dda058\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-gxssn" podUID="37d401e3-39ef-4596-8144-de1aba842d50" Jan 28 00:24:12.362745 containerd[2198]: time="2026-01-28T00:24:12.362658431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gj4l2,Uid:15a1f57d-6d54-4a40-8a77-34f9abd91cfa,Namespace:kube-system,Attempt:0,}" Jan 28 00:24:12.363436 containerd[2198]: time="2026-01-28T00:24:12.363411676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q9kdk,Uid:6961f070-d2ac-4acd-b900-0b7c4fdc8c18,Namespace:kube-system,Attempt:0,}" Jan 28 00:24:12.446543 containerd[2198]: time="2026-01-28T00:24:12.446452991Z" level=error msg="Failed to destroy network for sandbox \"7a9237f84b14539e31da254fc7d241cd42cdf84e7ed2098c23c1bb403db3cb3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:12.447897 systemd[1]: run-netns-cni\x2d3d5a8539\x2dd9a3\x2d2d81\x2d6181\x2ddcd74328bdec.mount: Deactivated successfully. Jan 28 00:24:12.451616 containerd[2198]: time="2026-01-28T00:24:12.451579044Z" level=error msg="Failed to destroy network for sandbox \"d89cb8977475d255b19dc678a1873718d831941eb14f7040d61b79030f9d7f1d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:12.453507 systemd[1]: run-netns-cni\x2dac1f4aae\x2db2b7\x2d5629\x2ddc72\x2db62afd77a62c.mount: Deactivated successfully. 
Jan 28 00:24:12.456847 containerd[2198]: time="2026-01-28T00:24:12.456795763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gj4l2,Uid:15a1f57d-6d54-4a40-8a77-34f9abd91cfa,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a9237f84b14539e31da254fc7d241cd42cdf84e7ed2098c23c1bb403db3cb3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:12.457577 kubelet[3705]: E0128 00:24:12.457226 3705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a9237f84b14539e31da254fc7d241cd42cdf84e7ed2098c23c1bb403db3cb3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:12.457577 kubelet[3705]: E0128 00:24:12.457384 3705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a9237f84b14539e31da254fc7d241cd42cdf84e7ed2098c23c1bb403db3cb3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gj4l2" Jan 28 00:24:12.457577 kubelet[3705]: E0128 00:24:12.457400 3705 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a9237f84b14539e31da254fc7d241cd42cdf84e7ed2098c23c1bb403db3cb3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gj4l2" Jan 28 00:24:12.458313 kubelet[3705]: E0128 00:24:12.457437 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-gj4l2_kube-system(15a1f57d-6d54-4a40-8a77-34f9abd91cfa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-gj4l2_kube-system(15a1f57d-6d54-4a40-8a77-34f9abd91cfa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a9237f84b14539e31da254fc7d241cd42cdf84e7ed2098c23c1bb403db3cb3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-gj4l2" podUID="15a1f57d-6d54-4a40-8a77-34f9abd91cfa" Jan 28 00:24:12.463663 containerd[2198]: time="2026-01-28T00:24:12.463604885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q9kdk,Uid:6961f070-d2ac-4acd-b900-0b7c4fdc8c18,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d89cb8977475d255b19dc678a1873718d831941eb14f7040d61b79030f9d7f1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:12.463752 kubelet[3705]: E0128 00:24:12.463726 3705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"d89cb8977475d255b19dc678a1873718d831941eb14f7040d61b79030f9d7f1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:12.463778 kubelet[3705]: E0128 00:24:12.463758 3705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d89cb8977475d255b19dc678a1873718d831941eb14f7040d61b79030f9d7f1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q9kdk" Jan 28 00:24:12.463778 kubelet[3705]: E0128 00:24:12.463771 3705 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d89cb8977475d255b19dc678a1873718d831941eb14f7040d61b79030f9d7f1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q9kdk" Jan 28 00:24:12.463813 kubelet[3705]: E0128 00:24:12.463792 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-q9kdk_kube-system(6961f070-d2ac-4acd-b900-0b7c4fdc8c18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-q9kdk_kube-system(6961f070-d2ac-4acd-b900-0b7c4fdc8c18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d89cb8977475d255b19dc678a1873718d831941eb14f7040d61b79030f9d7f1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-q9kdk" podUID="6961f070-d2ac-4acd-b900-0b7c4fdc8c18" Jan 28 00:24:13.362094 containerd[2198]: time="2026-01-28T00:24:13.362052846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db66f5c9f-n9l6g,Uid:d62fc2fd-8ccc-48de-b10b-98a0aa5672ea,Namespace:calico-apiserver,Attempt:0,}" Jan 28 00:24:13.362884 containerd[2198]: time="2026-01-28T00:24:13.362847691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-758f45684b-zdqfk,Uid:11c0eb0b-ad29-4c1b-b01f-f65a107c6011,Namespace:calico-system,Attempt:0,}" Jan 28 00:24:13.716092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2880307463.mount: Deactivated successfully. 
Jan 28 00:24:14.362556 containerd[2198]: time="2026-01-28T00:24:14.362430079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db66f5c9f-tsvdr,Uid:d774fe09-bd7c-498b-91a3-e6c2f720c9c3,Namespace:calico-apiserver,Attempt:0,}" Jan 28 00:24:14.362694 containerd[2198]: time="2026-01-28T00:24:14.362585635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cb8f7567-dv7tt,Uid:f8bf6ab5-ad5c-41b7-962e-92c73fabe079,Namespace:calico-apiserver,Attempt:0,}" Jan 28 00:24:15.361863 containerd[2198]: time="2026-01-28T00:24:15.361807883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gxssn,Uid:37d401e3-39ef-4596-8144-de1aba842d50,Namespace:calico-system,Attempt:0,}" Jan 28 00:24:16.361973 containerd[2198]: time="2026-01-28T00:24:16.361925806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-96ffb7cdb-26585,Uid:ace8222a-010a-4f04-ad26-8197c0467a4d,Namespace:calico-system,Attempt:0,}" Jan 28 00:24:17.362067 containerd[2198]: time="2026-01-28T00:24:17.362024328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2sfv,Uid:f6d08a70-95be-4168-8a2f-3e965a6278e2,Namespace:calico-system,Attempt:0,}" Jan 28 00:24:22.508804 containerd[2198]: time="2026-01-28T00:24:22.508754709Z" level=error msg="Failed to destroy network for sandbox \"44da53bb1f37cad7c87d6ac2fa1b61c710df64e2330e145c0a6afd39b174444f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:22.511133 systemd[1]: run-netns-cni\x2dd8b32592\x2d2034\x2d1b31\x2dac61\x2d3c14befc1b37.mount: Deactivated successfully. Jan 28 00:24:22.726039 containerd[2198]: time="2026-01-28T00:24:22.725994370Z" level=error msg="Failed to destroy network for sandbox \"2f92033365fa5e000d69bcf1917d92eee6b8fdeb0519b1c77600766d9971b211\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:22.800570 containerd[2198]: time="2026-01-28T00:24:22.800408397Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db66f5c9f-n9l6g,Uid:d62fc2fd-8ccc-48de-b10b-98a0aa5672ea,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"44da53bb1f37cad7c87d6ac2fa1b61c710df64e2330e145c0a6afd39b174444f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:22.801010 kubelet[3705]: E0128 00:24:22.800831 3705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44da53bb1f37cad7c87d6ac2fa1b61c710df64e2330e145c0a6afd39b174444f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:22.801010 kubelet[3705]: E0128 00:24:22.800911 3705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44da53bb1f37cad7c87d6ac2fa1b61c710df64e2330e145c0a6afd39b174444f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6db66f5c9f-n9l6g" Jan 28 00:24:22.802054 kubelet[3705]: E0128 00:24:22.801999 3705 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44da53bb1f37cad7c87d6ac2fa1b61c710df64e2330e145c0a6afd39b174444f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6db66f5c9f-n9l6g" Jan 28 00:24:22.802171 kubelet[3705]: E0128 00:24:22.802133 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6db66f5c9f-n9l6g_calico-apiserver(d62fc2fd-8ccc-48de-b10b-98a0aa5672ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6db66f5c9f-n9l6g_calico-apiserver(d62fc2fd-8ccc-48de-b10b-98a0aa5672ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44da53bb1f37cad7c87d6ac2fa1b61c710df64e2330e145c0a6afd39b174444f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-n9l6g" podUID="d62fc2fd-8ccc-48de-b10b-98a0aa5672ea" Jan 28 00:24:22.814982 containerd[2198]: time="2026-01-28T00:24:22.814952447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:24:22.818245 containerd[2198]: time="2026-01-28T00:24:22.818137528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-758f45684b-zdqfk,Uid:11c0eb0b-ad29-4c1b-b01f-f65a107c6011,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f92033365fa5e000d69bcf1917d92eee6b8fdeb0519b1c77600766d9971b211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:22.818922 kubelet[3705]: E0128 00:24:22.818608 3705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f92033365fa5e000d69bcf1917d92eee6b8fdeb0519b1c77600766d9971b211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:22.818922 kubelet[3705]: E0128 00:24:22.818868 3705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f92033365fa5e000d69bcf1917d92eee6b8fdeb0519b1c77600766d9971b211\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-758f45684b-zdqfk" Jan 28 00:24:22.818922 kubelet[3705]: E0128 00:24:22.818895 3705 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f92033365fa5e000d69bcf1917d92eee6b8fdeb0519b1c77600766d9971b211\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-758f45684b-zdqfk" Jan 28 00:24:22.819197 kubelet[3705]: E0128 00:24:22.819069 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-758f45684b-zdqfk_calico-system(11c0eb0b-ad29-4c1b-b01f-f65a107c6011)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-758f45684b-zdqfk_calico-system(11c0eb0b-ad29-4c1b-b01f-f65a107c6011)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f92033365fa5e000d69bcf1917d92eee6b8fdeb0519b1c77600766d9971b211\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-758f45684b-zdqfk" podUID="11c0eb0b-ad29-4c1b-b01f-f65a107c6011" Jan 28 00:24:22.835349 containerd[2198]: time="2026-01-28T00:24:22.835302331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 28 00:24:22.841200 containerd[2198]: time="2026-01-28T00:24:22.841163757Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:24:22.848572 containerd[2198]: time="2026-01-28T00:24:22.848542225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:24:22.849204 containerd[2198]: time="2026-01-28T00:24:22.849166210Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 20.342504114s" Jan 28 00:24:22.849204 containerd[2198]: time="2026-01-28T00:24:22.849189955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 28 00:24:22.861483 containerd[2198]: time="2026-01-28T00:24:22.861446390Z" level=info msg="CreateContainer within sandbox \"25a8ed7ba5fb7f49eb4ae466a99bd7cb6781dac8a7c99fbf22ff5b50d27a5ff4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 00:24:22.890897 containerd[2198]: time="2026-01-28T00:24:22.890796491Z" level=error msg="Failed to destroy network for sandbox \"2e058325477e8a0523878489c68b61d415bfa8c791a4b4e1cd8317d0e0eaa14f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:22.896420 containerd[2198]: time="2026-01-28T00:24:22.896395470Z" level=error msg="Failed to destroy network for sandbox \"aa32fa3bfab0bb0161cdadfad6e1cb906687abf1790972248ba419bc6082df76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:22.900657 containerd[2198]: 
time="2026-01-28T00:24:22.900633323Z" level=error msg="Failed to destroy network for sandbox \"58b2372eccfc3fef164ba0b3d42e440a3fcb91ce0d753920e1e8ee42cc3c7d01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:22.902107 containerd[2198]: time="2026-01-28T00:24:22.902079659Z" level=info msg="Container eaa35c31edbdfd7092a1be354104420436cdf3e02871d142657ac31d0440589a: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:24:22.904723 containerd[2198]: time="2026-01-28T00:24:22.904699188Z" level=error msg="Failed to destroy network for sandbox \"a88a8994b4ce7b645c6ef4b6ee5cce5f636f183796aebb2e82b9f6c47a1e1e71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:22.907596 containerd[2198]: time="2026-01-28T00:24:22.907567803Z" level=error msg="Failed to destroy network for sandbox \"de08959ab83f80007cd98564653eff36331bd8b31ff4882d4fe23657df9e29b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:22.909327 containerd[2198]: time="2026-01-28T00:24:22.909279130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db66f5c9f-tsvdr,Uid:d774fe09-bd7c-498b-91a3-e6c2f720c9c3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e058325477e8a0523878489c68b61d415bfa8c791a4b4e1cd8317d0e0eaa14f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:22.909627 kubelet[3705]: E0128 00:24:22.909424 3705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e058325477e8a0523878489c68b61d415bfa8c791a4b4e1cd8317d0e0eaa14f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:22.909627 kubelet[3705]: E0128 00:24:22.909460 3705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e058325477e8a0523878489c68b61d415bfa8c791a4b4e1cd8317d0e0eaa14f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6db66f5c9f-tsvdr" Jan 28 00:24:22.909627 kubelet[3705]: E0128 00:24:22.909474 3705 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e058325477e8a0523878489c68b61d415bfa8c791a4b4e1cd8317d0e0eaa14f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6db66f5c9f-tsvdr" Jan 28 00:24:22.909726 kubelet[3705]: E0128 00:24:22.909509 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-6db66f5c9f-tsvdr_calico-apiserver(d774fe09-bd7c-498b-91a3-e6c2f720c9c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6db66f5c9f-tsvdr_calico-apiserver(d774fe09-bd7c-498b-91a3-e6c2f720c9c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e058325477e8a0523878489c68b61d415bfa8c791a4b4e1cd8317d0e0eaa14f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-tsvdr" podUID="d774fe09-bd7c-498b-91a3-e6c2f720c9c3" Jan 28 00:24:22.923078 containerd[2198]: time="2026-01-28T00:24:22.923038615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cb8f7567-dv7tt,Uid:f8bf6ab5-ad5c-41b7-962e-92c73fabe079,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"58b2372eccfc3fef164ba0b3d42e440a3fcb91ce0d753920e1e8ee42cc3c7d01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:22.923223 kubelet[3705]: E0128 00:24:22.923188 3705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58b2372eccfc3fef164ba0b3d42e440a3fcb91ce0d753920e1e8ee42cc3c7d01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:22.923257 kubelet[3705]: E0128 00:24:22.923239 3705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58b2372eccfc3fef164ba0b3d42e440a3fcb91ce0d753920e1e8ee42cc3c7d01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65cb8f7567-dv7tt" Jan 28 00:24:22.923281 kubelet[3705]: E0128 00:24:22.923252 3705 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58b2372eccfc3fef164ba0b3d42e440a3fcb91ce0d753920e1e8ee42cc3c7d01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65cb8f7567-dv7tt" Jan 28 00:24:22.923545 kubelet[3705]: E0128 00:24:22.923286 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65cb8f7567-dv7tt_calico-apiserver(f8bf6ab5-ad5c-41b7-962e-92c73fabe079)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65cb8f7567-dv7tt_calico-apiserver(f8bf6ab5-ad5c-41b7-962e-92c73fabe079)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58b2372eccfc3fef164ba0b3d42e440a3fcb91ce0d753920e1e8ee42cc3c7d01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65cb8f7567-dv7tt" 
podUID="f8bf6ab5-ad5c-41b7-962e-92c73fabe079" Jan 28 00:24:22.933700 containerd[2198]: time="2026-01-28T00:24:22.933664565Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gxssn,Uid:37d401e3-39ef-4596-8144-de1aba842d50,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa32fa3bfab0bb0161cdadfad6e1cb906687abf1790972248ba419bc6082df76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:22.933862 kubelet[3705]: E0128 00:24:22.933810 3705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa32fa3bfab0bb0161cdadfad6e1cb906687abf1790972248ba419bc6082df76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:22.933907 kubelet[3705]: E0128 00:24:22.933871 3705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa32fa3bfab0bb0161cdadfad6e1cb906687abf1790972248ba419bc6082df76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-gxssn" Jan 28 00:24:22.933907 kubelet[3705]: E0128 00:24:22.933884 3705 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa32fa3bfab0bb0161cdadfad6e1cb906687abf1790972248ba419bc6082df76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-gxssn" Jan 28 00:24:22.933942 kubelet[3705]: E0128 00:24:22.933913 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-gxssn_calico-system(37d401e3-39ef-4596-8144-de1aba842d50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-gxssn_calico-system(37d401e3-39ef-4596-8144-de1aba842d50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa32fa3bfab0bb0161cdadfad6e1cb906687abf1790972248ba419bc6082df76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-gxssn" podUID="37d401e3-39ef-4596-8144-de1aba842d50" Jan 28 00:24:22.941447 containerd[2198]: time="2026-01-28T00:24:22.941418716Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-96ffb7cdb-26585,Uid:ace8222a-010a-4f04-ad26-8197c0467a4d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a88a8994b4ce7b645c6ef4b6ee5cce5f636f183796aebb2e82b9f6c47a1e1e71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:22.941919 kubelet[3705]: E0128 00:24:22.941899 3705 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a88a8994b4ce7b645c6ef4b6ee5cce5f636f183796aebb2e82b9f6c47a1e1e71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:22.942028 kubelet[3705]: E0128 00:24:22.942012 3705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a88a8994b4ce7b645c6ef4b6ee5cce5f636f183796aebb2e82b9f6c47a1e1e71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-96ffb7cdb-26585" Jan 28 00:24:22.942255 kubelet[3705]: E0128 00:24:22.942090 3705 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a88a8994b4ce7b645c6ef4b6ee5cce5f636f183796aebb2e82b9f6c47a1e1e71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-96ffb7cdb-26585" Jan 28 00:24:22.942255 kubelet[3705]: E0128 00:24:22.942128 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-96ffb7cdb-26585_calico-system(ace8222a-010a-4f04-ad26-8197c0467a4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-96ffb7cdb-26585_calico-system(ace8222a-010a-4f04-ad26-8197c0467a4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a88a8994b4ce7b645c6ef4b6ee5cce5f636f183796aebb2e82b9f6c47a1e1e71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-96ffb7cdb-26585" podUID="ace8222a-010a-4f04-ad26-8197c0467a4d" Jan 28 00:24:22.944252 containerd[2198]: time="2026-01-28T00:24:22.944222594Z" level=info msg="CreateContainer within sandbox \"25a8ed7ba5fb7f49eb4ae466a99bd7cb6781dac8a7c99fbf22ff5b50d27a5ff4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"eaa35c31edbdfd7092a1be354104420436cdf3e02871d142657ac31d0440589a\"" Jan 28 00:24:22.945989 containerd[2198]: time="2026-01-28T00:24:22.944436664Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2sfv,Uid:f6d08a70-95be-4168-8a2f-3e965a6278e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"de08959ab83f80007cd98564653eff36331bd8b31ff4882d4fe23657df9e29b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:22.945989 containerd[2198]: time="2026-01-28T00:24:22.944642469Z" level=info msg="StartContainer for \"eaa35c31edbdfd7092a1be354104420436cdf3e02871d142657ac31d0440589a\"" Jan 28 00:24:22.945989 containerd[2198]: time="2026-01-28T00:24:22.945625496Z" level=info msg="connecting to shim eaa35c31edbdfd7092a1be354104420436cdf3e02871d142657ac31d0440589a" address="unix:///run/containerd/s/fe23a4f70d666a4ce463b69b07f261c5bc175748e81b49f7a5d5cbc7a4698e96" protocol=ttrpc version=3 Jan 28 00:24:22.946934 kubelet[3705]: 
E0128 00:24:22.946874 3705 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de08959ab83f80007cd98564653eff36331bd8b31ff4882d4fe23657df9e29b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:24:22.947430 kubelet[3705]: E0128 00:24:22.947350 3705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de08959ab83f80007cd98564653eff36331bd8b31ff4882d4fe23657df9e29b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2sfv" Jan 28 00:24:22.947430 kubelet[3705]: E0128 00:24:22.947373 3705 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de08959ab83f80007cd98564653eff36331bd8b31ff4882d4fe23657df9e29b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2sfv" Jan 28 00:24:22.947430 kubelet[3705]: E0128 00:24:22.947401 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w2sfv_calico-system(f6d08a70-95be-4168-8a2f-3e965a6278e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w2sfv_calico-system(f6d08a70-95be-4168-8a2f-3e965a6278e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de08959ab83f80007cd98564653eff36331bd8b31ff4882d4fe23657df9e29b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w2sfv" podUID="f6d08a70-95be-4168-8a2f-3e965a6278e2" Jan 28 00:24:22.965980 systemd[1]: Started cri-containerd-eaa35c31edbdfd7092a1be354104420436cdf3e02871d142657ac31d0440589a.scope - libcontainer container eaa35c31edbdfd7092a1be354104420436cdf3e02871d142657ac31d0440589a. 
Jan 28 00:24:23.013348 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 28 00:24:23.013454 kernel: audit: type=1334 audit(1769559863.005:583): prog-id=195 op=LOAD Jan 28 00:24:23.005000 audit: BPF prog-id=195 op=LOAD Jan 28 00:24:23.005000 audit[4926]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4186 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:23.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613335633331656462646664373039326131626533353431303434 Jan 28 00:24:23.030996 kernel: audit: type=1300 audit(1769559863.005:583): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4186 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:23.008000 audit: BPF prog-id=196 op=LOAD Jan 28 00:24:23.053278 kernel: audit: type=1327 audit(1769559863.005:583): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613335633331656462646664373039326131626533353431303434 Jan 28 00:24:23.071115 kernel: audit: type=1334 audit(1769559863.008:584): prog-id=196 op=LOAD Jan 28 00:24:23.071253 kernel: audit: type=1300 audit(1769559863.008:584): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4186 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:23.008000 audit[4926]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4186 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:23.008000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613335633331656462646664373039326131626533353431303434 Jan 28 00:24:23.074231 containerd[2198]: time="2026-01-28T00:24:23.057944157Z" level=info msg="StartContainer for \"eaa35c31edbdfd7092a1be354104420436cdf3e02871d142657ac31d0440589a\" returns successfully" Jan 28 00:24:23.087495 kernel: audit: type=1327 audit(1769559863.008:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613335633331656462646664373039326131626533353431303434 Jan 28 00:24:23.012000 audit: BPF prog-id=196 op=UNLOAD Jan 28 00:24:23.012000 audit[4926]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:23.096879 kernel: audit: type=1334 audit(1769559863.012:585): prog-id=196 op=UNLOAD Jan 28 00:24:23.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613335633331656462646664373039326131626533353431303434 Jan 28 00:24:23.114781 kernel: audit: type=1300 audit(1769559863.012:585): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:23.012000 audit: BPF prog-id=195 op=UNLOAD Jan 28 00:24:23.135950 kernel: audit: type=1327 audit(1769559863.012:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613335633331656462646664373039326131626533353431303434 Jan 28 00:24:23.136018 kernel: audit: type=1334 audit(1769559863.012:586): prog-id=195 op=UNLOAD Jan 28 00:24:23.012000 audit[4926]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4186 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:23.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613335633331656462646664373039326131626533353431303434 Jan 28 00:24:23.012000 audit: BPF prog-id=197 op=LOAD Jan 28 00:24:23.012000 audit[4926]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4186 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:23.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613335633331656462646664373039326131626533353431303434 Jan 28 00:24:23.390200 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 28 00:24:23.390299 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 28 00:24:23.514208 systemd[1]: run-netns-cni\x2d3fc92b79\x2dcfd2\x2dd6bc\x2d9868\x2d711f8799f130.mount: Deactivated successfully. Jan 28 00:24:23.514280 systemd[1]: run-netns-cni\x2de8d5a40f\x2da52a\x2dcdc2\x2dd5c8\x2d844c1315c022.mount: Deactivated successfully. Jan 28 00:24:23.514314 systemd[1]: run-netns-cni\x2d0435d68e\x2d482b\x2df852\x2d1d35\x2d300636d7aa74.mount: Deactivated successfully. Jan 28 00:24:23.514353 systemd[1]: run-netns-cni\x2db0eafb94\x2dc231\x2d9dbb\x2dce0a\x2d409666a743c7.mount: Deactivated successfully. 
Jan 28 00:24:23.570107 kubelet[3705]: I0128 00:24:23.569936 3705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gtwbg" podStartSLOduration=1.8832607430000001 podStartE2EDuration="33.569921935s" podCreationTimestamp="2026-01-28 00:23:50 +0000 UTC" firstStartedPulling="2026-01-28 00:23:51.166558915 +0000 UTC m=+20.861155711" lastFinishedPulling="2026-01-28 00:24:22.853220107 +0000 UTC m=+52.547816903" observedRunningTime="2026-01-28 00:24:23.568933732 +0000 UTC m=+53.263530544" watchObservedRunningTime="2026-01-28 00:24:23.569921935 +0000 UTC m=+53.264518731" Jan 28 00:24:23.599850 kubelet[3705]: I0128 00:24:23.599780 3705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ace8222a-010a-4f04-ad26-8197c0467a4d-whisker-backend-key-pair\") pod \"ace8222a-010a-4f04-ad26-8197c0467a4d\" (UID: \"ace8222a-010a-4f04-ad26-8197c0467a4d\") " Jan 28 00:24:23.599850 kubelet[3705]: I0128 00:24:23.599840 3705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxx48\" (UniqueName: \"kubernetes.io/projected/ace8222a-010a-4f04-ad26-8197c0467a4d-kube-api-access-cxx48\") pod \"ace8222a-010a-4f04-ad26-8197c0467a4d\" (UID: \"ace8222a-010a-4f04-ad26-8197c0467a4d\") " Jan 28 00:24:23.599850 kubelet[3705]: I0128 00:24:23.599856 3705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ace8222a-010a-4f04-ad26-8197c0467a4d-whisker-ca-bundle\") pod \"ace8222a-010a-4f04-ad26-8197c0467a4d\" (UID: \"ace8222a-010a-4f04-ad26-8197c0467a4d\") " Jan 28 00:24:23.609002 kubelet[3705]: I0128 00:24:23.608519 3705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ace8222a-010a-4f04-ad26-8197c0467a4d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ace8222a-010a-4f04-ad26-8197c0467a4d" (UID: "ace8222a-010a-4f04-ad26-8197c0467a4d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 00:24:23.610812 systemd[1]: var-lib-kubelet-pods-ace8222a\x2d010a\x2d4f04\x2dad26\x2d8197c0467a4d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 28 00:24:23.613881 systemd[1]: var-lib-kubelet-pods-ace8222a\x2d010a\x2d4f04\x2dad26\x2d8197c0467a4d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcxx48.mount: Deactivated successfully. Jan 28 00:24:23.614066 kubelet[3705]: I0128 00:24:23.613513 3705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ace8222a-010a-4f04-ad26-8197c0467a4d-kube-api-access-cxx48" (OuterVolumeSpecName: "kube-api-access-cxx48") pod "ace8222a-010a-4f04-ad26-8197c0467a4d" (UID: "ace8222a-010a-4f04-ad26-8197c0467a4d"). InnerVolumeSpecName "kube-api-access-cxx48". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 00:24:23.627828 kubelet[3705]: I0128 00:24:23.627556 3705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ace8222a-010a-4f04-ad26-8197c0467a4d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ace8222a-010a-4f04-ad26-8197c0467a4d" (UID: "ace8222a-010a-4f04-ad26-8197c0467a4d"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 00:24:23.700613 kubelet[3705]: I0128 00:24:23.700560 3705 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ace8222a-010a-4f04-ad26-8197c0467a4d-whisker-backend-key-pair\") on node \"ci-4547.1.0-n-77eb5aaac5\" DevicePath \"\"" Jan 28 00:24:23.700613 kubelet[3705]: I0128 00:24:23.700587 3705 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cxx48\" (UniqueName: \"kubernetes.io/projected/ace8222a-010a-4f04-ad26-8197c0467a4d-kube-api-access-cxx48\") on node \"ci-4547.1.0-n-77eb5aaac5\" DevicePath \"\"" Jan 28 00:24:23.700613 kubelet[3705]: I0128 00:24:23.700595 3705 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ace8222a-010a-4f04-ad26-8197c0467a4d-whisker-ca-bundle\") on node \"ci-4547.1.0-n-77eb5aaac5\" DevicePath \"\"" Jan 28 00:24:24.366924 systemd[1]: Removed slice kubepods-besteffort-podace8222a_010a_4f04_ad26_8197c0467a4d.slice - libcontainer container kubepods-besteffort-podace8222a_010a_4f04_ad26_8197c0467a4d.slice. Jan 28 00:24:24.615769 systemd[1]: Created slice kubepods-besteffort-pode2c157ae_30f5_408a_abb4_61e3e5e3c10f.slice - libcontainer container kubepods-besteffort-pode2c157ae_30f5_408a_abb4_61e3e5e3c10f.slice. Jan 28 00:24:24.706253 kubelet[3705]: I0128 00:24:24.706160 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e2c157ae-30f5-408a-abb4-61e3e5e3c10f-whisker-backend-key-pair\") pod \"whisker-65d59647fb-65s4b\" (UID: \"e2c157ae-30f5-408a-abb4-61e3e5e3c10f\") " pod="calico-system/whisker-65d59647fb-65s4b" Jan 28 00:24:24.706253 kubelet[3705]: I0128 00:24:24.706203 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl6bk\" (UniqueName: \"kubernetes.io/projected/e2c157ae-30f5-408a-abb4-61e3e5e3c10f-kube-api-access-sl6bk\") pod \"whisker-65d59647fb-65s4b\" (UID: \"e2c157ae-30f5-408a-abb4-61e3e5e3c10f\") " pod="calico-system/whisker-65d59647fb-65s4b" Jan 28 00:24:24.706253 kubelet[3705]: I0128 00:24:24.706218 3705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2c157ae-30f5-408a-abb4-61e3e5e3c10f-whisker-ca-bundle\") pod \"whisker-65d59647fb-65s4b\" (UID: \"e2c157ae-30f5-408a-abb4-61e3e5e3c10f\") " pod="calico-system/whisker-65d59647fb-65s4b" Jan 28 00:24:24.924269 containerd[2198]: time="2026-01-28T00:24:24.924230123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65d59647fb-65s4b,Uid:e2c157ae-30f5-408a-abb4-61e3e5e3c10f,Namespace:calico-system,Attempt:0,}" Jan 28 00:24:24.993000 audit: BPF prog-id=198 op=LOAD Jan 28 00:24:24.993000 audit[5141]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff63ea398 a2=98 a3=fffff63ea388 items=0 ppid=5064 pid=5141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:24.993000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 
00:24:24.993000 audit: BPF prog-id=198 op=UNLOAD Jan 28 00:24:24.993000 audit[5141]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff63ea368 a3=0 items=0 ppid=5064 pid=5141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:24.993000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 00:24:24.993000 audit: BPF prog-id=199 op=LOAD Jan 28 00:24:24.993000 audit[5141]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff63ea248 a2=74 a3=95 items=0 ppid=5064 pid=5141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:24.993000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 00:24:24.993000 audit: BPF prog-id=199 op=UNLOAD Jan 28 00:24:24.993000 audit[5141]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5064 pid=5141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:24.993000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 00:24:24.993000 audit: BPF prog-id=200 op=LOAD Jan 28 00:24:24.993000 audit[5141]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff63ea278 a2=40 a3=fffff63ea2a8 items=0 ppid=5064 pid=5141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:24.993000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 00:24:24.994000 audit: BPF prog-id=200 op=UNLOAD Jan 28 00:24:24.994000 audit[5141]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff63ea2a8 items=0 ppid=5064 pid=5141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:24.994000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 00:24:25.000000 audit: BPF prog-id=201 op=LOAD Jan 28 00:24:25.000000 audit[5144]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc69869a8 a2=98 a3=ffffc6986998 items=0 ppid=5064 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.000000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:24:25.001000 audit: BPF prog-id=201 op=UNLOAD Jan 28 00:24:25.001000 audit[5144]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc6986978 a3=0 items=0 ppid=5064 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.001000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:24:25.002000 audit: BPF prog-id=202 op=LOAD Jan 28 00:24:25.002000 audit[5144]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc6986638 a2=74 a3=95 items=0 ppid=5064 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.002000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:24:25.003000 audit: BPF prog-id=202 op=UNLOAD Jan 28 00:24:25.003000 audit[5144]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=5064 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.003000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:24:25.003000 audit: BPF prog-id=203 op=LOAD Jan 28 00:24:25.003000 audit[5144]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc6986698 a2=94 a3=2 items=0 ppid=5064 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.003000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:24:25.003000 audit: BPF prog-id=203 op=UNLOAD Jan 28 00:24:25.003000 audit[5144]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=5064 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.003000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:24:25.098000 audit: BPF prog-id=204 op=LOAD Jan 28 00:24:25.098000 audit[5144]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc6986658 a2=40 a3=ffffc6986688 items=0 ppid=5064 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.098000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:24:25.098000 audit: BPF prog-id=204 op=UNLOAD Jan 28 00:24:25.098000 audit[5144]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc6986688 items=0 ppid=5064 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.098000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:24:25.107000 audit: BPF prog-id=205 op=LOAD Jan 28 00:24:25.107000 audit[5144]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc6986668 a2=94 a3=4 items=0 ppid=5064 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.107000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:24:25.107000 audit: BPF prog-id=205 op=UNLOAD Jan 28 00:24:25.107000 audit[5144]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=5064 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.107000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:24:25.107000 audit: BPF prog-id=206 op=LOAD Jan 28 00:24:25.107000 audit[5144]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc69864a8 a2=94 a3=5 items=0 ppid=5064 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.107000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:24:25.107000 audit: BPF prog-id=206 op=UNLOAD Jan 28 00:24:25.107000 audit[5144]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=5064 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.107000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:24:25.107000 audit: BPF prog-id=207 op=LOAD Jan 28 00:24:25.107000 audit[5144]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc69866d8 a2=94 a3=6 items=0 ppid=5064 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.107000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:24:25.108000 audit: BPF prog-id=207 op=UNLOAD Jan 28 00:24:25.108000 audit[5144]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=5064 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.108000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:24:25.108000 audit: BPF prog-id=208 op=LOAD Jan 28 00:24:25.108000 audit[5144]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc6985ea8 a2=94 a3=83 items=0 ppid=5064 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.108000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:24:25.108000 audit: BPF prog-id=209 op=LOAD Jan 28 00:24:25.108000 audit[5144]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc6985c68 a2=94 a3=2 items=0 ppid=5064 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.108000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:24:25.108000 audit: BPF prog-id=209 op=UNLOAD Jan 28 00:24:25.108000 audit[5144]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=5064 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.108000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:24:25.108000 audit: BPF prog-id=208 op=UNLOAD Jan 28 00:24:25.108000 audit[5144]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=14eb4620 a3=14ea7b00 items=0 ppid=5064 pid=5144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.108000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:24:25.117000 audit: BPF prog-id=210 op=LOAD Jan 28 00:24:25.117000 audit[5180]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff6827898 a2=98 a3=fffff6827888 items=0 ppid=5064 pid=5180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.117000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 00:24:25.117000 audit: BPF prog-id=210 op=UNLOAD Jan 28 00:24:25.117000 audit[5180]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff6827868 a3=0 items=0 ppid=5064 pid=5180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.117000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 00:24:25.117000 audit: BPF prog-id=211 op=LOAD Jan 28 00:24:25.117000 audit[5180]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff6827748 a2=74 a3=95 items=0 ppid=5064 pid=5180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.117000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 00:24:25.117000 audit: BPF prog-id=211 op=UNLOAD Jan 28 00:24:25.117000 audit[5180]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5064 pid=5180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.117000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 00:24:25.117000 audit: BPF prog-id=212 op=LOAD Jan 28 00:24:25.117000 audit[5180]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff6827778 a2=40 a3=fffff68277a8 items=0 ppid=5064 pid=5180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.117000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 00:24:25.117000 audit: BPF prog-id=212 op=UNLOAD Jan 28 00:24:25.117000 audit[5180]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff68277a8 items=0 ppid=5064 pid=5180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.117000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 00:24:25.187214 systemd-networkd[1903]: vxlan.calico: Link UP Jan 28 00:24:25.188001 systemd-networkd[1903]: vxlan.calico: Gained carrier Jan 28 00:24:25.190225 systemd-networkd[1903]: calia00a67d3107: Link UP Jan 28 00:24:25.191143 systemd-networkd[1903]: calia00a67d3107: Gained carrier Jan 28 00:24:25.219000 audit: BPF prog-id=213 op=LOAD Jan 28 00:24:25.219000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd0e24fe8 a2=98 a3=ffffd0e24fd8 items=0 ppid=5064 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.219000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:24:25.219000 audit: BPF prog-id=213 op=UNLOAD Jan 28 00:24:25.219000 audit[5213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd0e24fb8 a3=0 items=0 ppid=5064 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.219000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:24:25.221277 containerd[2198]: 2026-01-28 00:24:25.106 [INFO][5161] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--n--77eb5aaac5-k8s-whisker--65d59647fb--65s4b-eth0 whisker-65d59647fb- calico-system e2c157ae-30f5-408a-abb4-61e3e5e3c10f 961 0 2026-01-28 00:24:24 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:65d59647fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547.1.0-n-77eb5aaac5 whisker-65d59647fb-65s4b eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia00a67d3107 [] [] }} ContainerID="4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a" Namespace="calico-system" Pod="whisker-65d59647fb-65s4b" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-whisker--65d59647fb--65s4b-" Jan 28 00:24:25.221277 containerd[2198]: 2026-01-28 00:24:25.107 [INFO][5161] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a" Namespace="calico-system" Pod="whisker-65d59647fb-65s4b" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-whisker--65d59647fb--65s4b-eth0" Jan 28 00:24:25.221277 containerd[2198]: 2026-01-28 00:24:25.133 [INFO][5175] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a" HandleID="k8s-pod-network.4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-whisker--65d59647fb--65s4b-eth0" Jan 28 00:24:25.221387 containerd[2198]: 2026-01-28 00:24:25.133 [INFO][5175] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a" HandleID="k8s-pod-network.4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-whisker--65d59647fb--65s4b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.1.0-n-77eb5aaac5", "pod":"whisker-65d59647fb-65s4b", "timestamp":"2026-01-28 00:24:25.133586941 +0000 UTC"}, Hostname:"ci-4547.1.0-n-77eb5aaac5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:24:25.221387 containerd[2198]: 2026-01-28 00:24:25.133 [INFO][5175] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:24:25.221387 containerd[2198]: 2026-01-28 00:24:25.133 [INFO][5175] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
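
The pod_startup_latency_tracker record for calico-node-gtwbg earlier in this section is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window between firstStartedPulling and lastFinishedPulling. A quick arithmetic check in Python, with timestamps copied from that log line and rounded to microseconds; this is only a consistency check, not part of the log.

```python
from datetime import datetime, timezone

utc = timezone.utc
created    = datetime(2026, 1, 28, 0, 23, 50, 0,      tzinfo=utc)  # podCreationTimestamp
pull_start = datetime(2026, 1, 28, 0, 23, 51, 166559, tzinfo=utc)  # firstStartedPulling
pull_end   = datetime(2026, 1, 28, 0, 24, 22, 853220, tzinfo=utc)  # lastFinishedPulling
observed   = datetime(2026, 1, 28, 0, 24, 23, 569922, tzinfo=utc)  # watchObservedRunningTime

e2e = (observed - created).total_seconds()
slo = e2e - (pull_end - pull_start).total_seconds()
print(e2e)  # ~33.57 s, matching podStartE2EDuration="33.569921935s"
print(slo)  # ~1.88 s,  matching podStartSLOduration=1.883260743
```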
Jan 28 00:24:25.221387 containerd[2198]: 2026-01-28 00:24:25.133 [INFO][5175] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-n-77eb5aaac5' Jan 28 00:24:25.221387 containerd[2198]: 2026-01-28 00:24:25.139 [INFO][5175] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:25.221387 containerd[2198]: 2026-01-28 00:24:25.143 [INFO][5175] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:25.221387 containerd[2198]: 2026-01-28 00:24:25.148 [INFO][5175] ipam/ipam.go 511: Trying affinity for 192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:25.221387 containerd[2198]: 2026-01-28 00:24:25.151 [INFO][5175] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:25.221387 containerd[2198]: 2026-01-28 00:24:25.152 [INFO][5175] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:25.221513 containerd[2198]: 2026-01-28 00:24:25.152 [INFO][5175] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:25.221513 containerd[2198]: 2026-01-28 00:24:25.154 [INFO][5175] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a Jan 28 00:24:25.221513 containerd[2198]: 2026-01-28 00:24:25.161 [INFO][5175] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:25.221513 containerd[2198]: 2026-01-28 00:24:25.170 [INFO][5175] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.11.193/26] block=192.168.11.192/26 handle="k8s-pod-network.4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:25.221513 containerd[2198]: 2026-01-28 00:24:25.170 [INFO][5175] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.193/26] handle="k8s-pod-network.4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:25.221513 containerd[2198]: 2026-01-28 00:24:25.170 [INFO][5175] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
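
The IPAM lines above claim 192.168.11.193/26 out of the node's affine block 192.168.11.192/26, i.e. the first host address (in the usual network/broadcast sense) of a 64-address block. The block arithmetic can be confirmed with the Python standard library; this is illustrative only and says nothing about Calico's own allocation policy.

```python
import ipaddress

block = ipaddress.ip_network("192.168.11.192/26")
print(block.num_addresses)                               # 64 addresses per /26 block
print(next(block.hosts()))                               # 192.168.11.193, the first usable host
print(ipaddress.ip_address("192.168.11.193") in block)   # True: the claimed IP sits in the affine block
```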
Jan 28 00:24:25.221513 containerd[2198]: 2026-01-28 00:24:25.170 [INFO][5175] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.193/26] IPv6=[] ContainerID="4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a" HandleID="k8s-pod-network.4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-whisker--65d59647fb--65s4b-eth0" Jan 28 00:24:25.221603 containerd[2198]: 2026-01-28 00:24:25.174 [INFO][5161] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a" Namespace="calico-system" Pod="whisker-65d59647fb-65s4b" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-whisker--65d59647fb--65s4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--n--77eb5aaac5-k8s-whisker--65d59647fb--65s4b-eth0", GenerateName:"whisker-65d59647fb-", Namespace:"calico-system", SelfLink:"", UID:"e2c157ae-30f5-408a-abb4-61e3e5e3c10f", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 24, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65d59647fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-n-77eb5aaac5", ContainerID:"", Pod:"whisker-65d59647fb-65s4b", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.11.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia00a67d3107", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:24:25.221603 containerd[2198]: 2026-01-28 00:24:25.175 [INFO][5161] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.193/32] ContainerID="4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a" Namespace="calico-system" Pod="whisker-65d59647fb-65s4b" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-whisker--65d59647fb--65s4b-eth0" Jan 28 00:24:25.221650 containerd[2198]: 2026-01-28 00:24:25.175 [INFO][5161] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia00a67d3107 ContainerID="4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a" Namespace="calico-system" Pod="whisker-65d59647fb-65s4b" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-whisker--65d59647fb--65s4b-eth0" Jan 28 00:24:25.221650 containerd[2198]: 2026-01-28 00:24:25.192 [INFO][5161] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a" Namespace="calico-system" Pod="whisker-65d59647fb-65s4b" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-whisker--65d59647fb--65s4b-eth0" Jan 28 00:24:25.221678 containerd[2198]: 2026-01-28 00:24:25.193 [INFO][5161] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a" Namespace="calico-system" 
Pod="whisker-65d59647fb-65s4b" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-whisker--65d59647fb--65s4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--n--77eb5aaac5-k8s-whisker--65d59647fb--65s4b-eth0", GenerateName:"whisker-65d59647fb-", Namespace:"calico-system", SelfLink:"", UID:"e2c157ae-30f5-408a-abb4-61e3e5e3c10f", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 24, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65d59647fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-n-77eb5aaac5", ContainerID:"4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a", Pod:"whisker-65d59647fb-65s4b", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.11.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia00a67d3107", MAC:"aa:05:2d:eb:c6:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:24:25.221711 containerd[2198]: 2026-01-28 00:24:25.215 [INFO][5161] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a" Namespace="calico-system" Pod="whisker-65d59647fb-65s4b" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-whisker--65d59647fb--65s4b-eth0" Jan 28 00:24:25.222000 audit: BPF prog-id=214 op=LOAD Jan 28 00:24:25.222000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd0e24cc8 a2=74 a3=95 items=0 ppid=5064 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.222000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:24:25.222000 audit: BPF prog-id=214 op=UNLOAD Jan 28 00:24:25.222000 audit[5213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5064 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.222000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:24:25.222000 audit: BPF prog-id=215 op=LOAD Jan 28 00:24:25.222000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd0e24d28 a2=94 a3=2 items=0 ppid=5064 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.222000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:24:25.223000 audit: BPF prog-id=215 op=UNLOAD Jan 28 00:24:25.223000 audit[5213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=5064 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.223000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:24:25.223000 audit: BPF prog-id=216 op=LOAD Jan 28 00:24:25.223000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd0e24ba8 a2=40 a3=ffffd0e24bd8 items=0 ppid=5064 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.223000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:24:25.223000 audit: BPF prog-id=216 op=UNLOAD Jan 28 00:24:25.223000 audit[5213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffd0e24bd8 items=0 ppid=5064 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.223000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:24:25.223000 audit: BPF prog-id=217 op=LOAD Jan 28 00:24:25.223000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd0e24cf8 a2=94 a3=b7 items=0 ppid=5064 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.223000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:24:25.223000 audit: BPF prog-id=217 op=UNLOAD Jan 28 00:24:25.223000 audit[5213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=5064 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.223000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:24:25.226000 audit: BPF prog-id=218 op=LOAD Jan 28 00:24:25.226000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd0e243a8 a2=94 a3=2 items=0 ppid=5064 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.226000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:24:25.226000 audit: BPF prog-id=218 op=UNLOAD Jan 28 00:24:25.226000 audit[5213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=5064 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.226000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:24:25.227000 audit: BPF prog-id=219 op=LOAD Jan 28 00:24:25.227000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd0e24538 a2=94 a3=30 items=0 ppid=5064 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.227000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:24:25.233000 audit: BPF prog-id=220 op=LOAD Jan 28 00:24:25.233000 audit[5223]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc00fe168 a2=98 a3=ffffc00fe158 items=0 ppid=5064 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.233000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:24:25.233000 audit: BPF prog-id=220 op=UNLOAD Jan 28 00:24:25.233000 audit[5223]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc00fe138 a3=0 items=0 ppid=5064 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.233000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:24:25.233000 audit: BPF prog-id=221 op=LOAD Jan 28 00:24:25.233000 audit[5223]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc00fddf8 
a2=74 a3=95 items=0 ppid=5064 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.233000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:24:25.234000 audit: BPF prog-id=221 op=UNLOAD Jan 28 00:24:25.234000 audit[5223]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=5064 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.234000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:24:25.234000 audit: BPF prog-id=222 op=LOAD Jan 28 00:24:25.234000 audit[5223]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc00fde58 a2=94 a3=2 items=0 ppid=5064 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.234000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:24:25.234000 audit: BPF prog-id=222 op=UNLOAD Jan 28 00:24:25.234000 audit[5223]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=5064 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.234000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:24:25.265980 containerd[2198]: time="2026-01-28T00:24:25.265859082Z" level=info msg="connecting to shim 4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a" address="unix:///run/containerd/s/a0716124141740e9915b1eed0e3cbd44665d494dea754d9e8abd1dacf053189f" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:24:25.291332 systemd[1]: Started cri-containerd-4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a.scope - libcontainer container 4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a. 
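
The "connecting to shim" record just above is logfmt-style: key=value pairs with quoted values, carrying the sandbox ID (also visible in the cri-containerd-<id>.scope unit name) and the shim's ttrpc unix socket. A small parsing sketch for a line of that shape; the sample string is copied from the log, while the regex and field handling are an assumption about the quoting, not a containerd API.

```python
import re

line = ('time="2026-01-28T00:24:25.265859082Z" level=info '
        'msg="connecting to shim 4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a" '
        'address="unix:///run/containerd/s/a0716124141740e9915b1eed0e3cbd44665d494dea754d9e8abd1dacf053189f" '
        'namespace=k8s.io protocol=ttrpc version=3')

# Pull out key=value pairs, allowing quoted values with spaces.
fields = dict(re.findall(r'(\w+)=("(?:[^"]*)"|\S+)', line))
fields = {k: v.strip('"') for k, v in fields.items()}

sandbox_id = fields["msg"].rsplit(" ", 1)[-1]
print(sandbox_id)         # 4c666a63...ce308a, matching the cri-containerd-<id>.scope unit above
print(fields["address"])  # the shim's ttrpc unix socket under /run/containerd/s/
```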
Jan 28 00:24:25.326000 audit: BPF prog-id=223 op=LOAD Jan 28 00:24:25.327000 audit: BPF prog-id=224 op=LOAD Jan 28 00:24:25.327000 audit[5242]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5232 pid=5242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463363636613633633563653362376135306330376531396331313335 Jan 28 00:24:25.327000 audit: BPF prog-id=224 op=UNLOAD Jan 28 00:24:25.327000 audit[5242]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5232 pid=5242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463363636613633633563653362376135306330376531396331313335 Jan 28 00:24:25.327000 audit: BPF prog-id=225 op=LOAD Jan 28 00:24:25.327000 audit[5242]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5232 pid=5242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463363636613633633563653362376135306330376531396331313335 Jan 28 00:24:25.327000 audit: BPF prog-id=226 op=LOAD Jan 28 00:24:25.327000 audit[5242]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5232 pid=5242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463363636613633633563653362376135306330376531396331313335 Jan 28 00:24:25.327000 audit: BPF prog-id=226 op=UNLOAD Jan 28 00:24:25.327000 audit[5242]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5232 pid=5242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463363636613633633563653362376135306330376531396331313335 Jan 28 00:24:25.327000 audit: BPF prog-id=225 op=UNLOAD Jan 28 00:24:25.327000 audit[5242]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5232 pid=5242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463363636613633633563653362376135306330376531396331313335 Jan 28 00:24:25.327000 audit: BPF prog-id=227 op=LOAD Jan 28 00:24:25.327000 audit[5242]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5232 pid=5242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463363636613633633563653362376135306330376531396331313335 Jan 28 00:24:25.336000 audit: BPF prog-id=228 op=LOAD Jan 28 00:24:25.336000 audit[5223]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc00fde18 a2=40 a3=ffffc00fde48 items=0 ppid=5064 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.336000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:24:25.336000 audit: BPF prog-id=228 op=UNLOAD Jan 28 00:24:25.336000 audit[5223]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc00fde48 items=0 ppid=5064 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.336000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:24:25.345000 audit: BPF prog-id=229 op=LOAD Jan 28 00:24:25.345000 audit[5223]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc00fde28 a2=94 a3=4 items=0 ppid=5064 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.345000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:24:25.345000 audit: BPF prog-id=229 op=UNLOAD Jan 28 00:24:25.345000 audit[5223]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=5064 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.345000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:24:25.345000 audit: BPF prog-id=230 op=LOAD Jan 28 00:24:25.345000 audit[5223]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc00fdc68 a2=94 a3=5 items=0 ppid=5064 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.345000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:24:25.346000 audit: BPF prog-id=230 op=UNLOAD Jan 28 00:24:25.346000 audit[5223]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=5064 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:24:25.346000 audit: BPF prog-id=231 op=LOAD Jan 28 00:24:25.346000 audit[5223]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc00fde98 a2=94 a3=6 items=0 ppid=5064 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:24:25.346000 audit: BPF prog-id=231 op=UNLOAD Jan 28 00:24:25.346000 audit[5223]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=5064 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:24:25.346000 audit: BPF prog-id=232 op=LOAD Jan 28 00:24:25.346000 audit[5223]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc00fd668 a2=94 a3=83 items=0 ppid=5064 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.346000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:24:25.347000 audit: BPF prog-id=233 op=LOAD Jan 28 00:24:25.347000 audit[5223]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc00fd428 a2=94 a3=2 items=0 ppid=5064 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:24:25.347000 audit: BPF prog-id=233 op=UNLOAD Jan 28 00:24:25.347000 audit[5223]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=5064 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:24:25.347000 audit: BPF prog-id=232 op=UNLOAD Jan 28 00:24:25.347000 audit[5223]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=1e39e620 a3=1e391b00 items=0 ppid=5064 pid=5223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.347000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:24:25.353000 audit: BPF prog-id=219 op=UNLOAD Jan 28 00:24:25.353000 audit[5064]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000d444c0 a2=0 a3=0 items=0 ppid=5041 pid=5064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.353000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 28 00:24:25.362042 containerd[2198]: time="2026-01-28T00:24:25.362005015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q9kdk,Uid:6961f070-d2ac-4acd-b900-0b7c4fdc8c18,Namespace:kube-system,Attempt:0,}" Jan 28 00:24:25.393570 containerd[2198]: time="2026-01-28T00:24:25.393479326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65d59647fb-65s4b,Uid:e2c157ae-30f5-408a-abb4-61e3e5e3c10f,Namespace:calico-system,Attempt:0,} returns sandbox id \"4c666a63c5ce3b7a50c07e19c113540fc905e1d293e71d39493232cfb7ce308a\"" Jan 28 00:24:25.394733 containerd[2198]: time="2026-01-28T00:24:25.394684840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 00:24:25.476368 systemd-networkd[1903]: calic1ad9b2955c: Link UP Jan 28 00:24:25.478861 systemd-networkd[1903]: calic1ad9b2955c: Gained carrier Jan 28 00:24:25.497263 containerd[2198]: 2026-01-28 00:24:25.425 [INFO][5281] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--q9kdk-eth0 coredns-668d6bf9bc- kube-system 6961f070-d2ac-4acd-b900-0b7c4fdc8c18 844 0 2026-01-28 00:23:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.1.0-n-77eb5aaac5 coredns-668d6bf9bc-q9kdk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] 
calic1ad9b2955c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53" Namespace="kube-system" Pod="coredns-668d6bf9bc-q9kdk" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--q9kdk-" Jan 28 00:24:25.497263 containerd[2198]: 2026-01-28 00:24:25.425 [INFO][5281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53" Namespace="kube-system" Pod="coredns-668d6bf9bc-q9kdk" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--q9kdk-eth0" Jan 28 00:24:25.497263 containerd[2198]: 2026-01-28 00:24:25.443 [INFO][5293] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53" HandleID="k8s-pod-network.1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--q9kdk-eth0" Jan 28 00:24:25.497393 containerd[2198]: 2026-01-28 00:24:25.443 [INFO][5293] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53" HandleID="k8s-pod-network.1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--q9kdk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3620), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.1.0-n-77eb5aaac5", "pod":"coredns-668d6bf9bc-q9kdk", "timestamp":"2026-01-28 00:24:25.443061659 +0000 UTC"}, Hostname:"ci-4547.1.0-n-77eb5aaac5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:24:25.497393 containerd[2198]: 2026-01-28 00:24:25.443 [INFO][5293] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:24:25.497393 containerd[2198]: 2026-01-28 00:24:25.443 [INFO][5293] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:24:25.497393 containerd[2198]: 2026-01-28 00:24:25.443 [INFO][5293] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-n-77eb5aaac5' Jan 28 00:24:25.497393 containerd[2198]: 2026-01-28 00:24:25.447 [INFO][5293] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:25.497393 containerd[2198]: 2026-01-28 00:24:25.451 [INFO][5293] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:25.497393 containerd[2198]: 2026-01-28 00:24:25.454 [INFO][5293] ipam/ipam.go 511: Trying affinity for 192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:25.497393 containerd[2198]: 2026-01-28 00:24:25.456 [INFO][5293] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:25.497393 containerd[2198]: 2026-01-28 00:24:25.457 [INFO][5293] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:25.497538 containerd[2198]: 2026-01-28 00:24:25.457 [INFO][5293] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:25.497538 containerd[2198]: 2026-01-28 00:24:25.459 [INFO][5293] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53 Jan 28 00:24:25.497538 containerd[2198]: 2026-01-28 00:24:25.466 [INFO][5293] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:25.497538 containerd[2198]: 2026-01-28 00:24:25.471 [INFO][5293] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.11.194/26] block=192.168.11.192/26 handle="k8s-pod-network.1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:25.497538 containerd[2198]: 2026-01-28 00:24:25.471 [INFO][5293] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.194/26] handle="k8s-pod-network.1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:25.497538 containerd[2198]: 2026-01-28 00:24:25.471 [INFO][5293] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
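The audit PROCTITLE records above store the audited process's full command line as hex with NUL-separated arguments. A minimal decoding sketch (plain Python 3, standard library only; the sample value is copied verbatim from one of the bpftool records in this log):

# Decode a Linux audit PROCTITLE value: hex-encoded argv, NUL-separated.
hex_proctitle = (
    "627066746F6F6C00"          # bpftool\0
    "2D2D6A736F6E00"            # --json\0
    "2D2D70726574747900"        # --pretty\0
    "70726F6700"                # prog\0
    "73686F7700"                # show\0
    "70696E6E656400"            # pinned\0
    "2F7379732F66732F6270662F63616C69636F2F7864702F"
    "70726566696C7465725F76315F63616C69636F5F746D705F41"
)
argv = bytes.fromhex(hex_proctitle).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# prints: bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A

The same decoding applies to the runc, iptables-nft-restore, and calico-node proctitle values elsewhere in this log.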
Jan 28 00:24:25.497538 containerd[2198]: 2026-01-28 00:24:25.471 [INFO][5293] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.194/26] IPv6=[] ContainerID="1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53" HandleID="k8s-pod-network.1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--q9kdk-eth0" Jan 28 00:24:25.497640 containerd[2198]: 2026-01-28 00:24:25.473 [INFO][5281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53" Namespace="kube-system" Pod="coredns-668d6bf9bc-q9kdk" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--q9kdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--q9kdk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6961f070-d2ac-4acd-b900-0b7c4fdc8c18", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 23, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-n-77eb5aaac5", ContainerID:"", Pod:"coredns-668d6bf9bc-q9kdk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1ad9b2955c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:24:25.497640 containerd[2198]: 2026-01-28 00:24:25.473 [INFO][5281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.194/32] ContainerID="1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53" Namespace="kube-system" Pod="coredns-668d6bf9bc-q9kdk" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--q9kdk-eth0" Jan 28 00:24:25.497640 containerd[2198]: 2026-01-28 00:24:25.473 [INFO][5281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1ad9b2955c ContainerID="1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53" Namespace="kube-system" Pod="coredns-668d6bf9bc-q9kdk" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--q9kdk-eth0" Jan 28 00:24:25.497640 containerd[2198]: 2026-01-28 00:24:25.479 [INFO][5281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-q9kdk" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--q9kdk-eth0" Jan 28 00:24:25.497640 containerd[2198]: 2026-01-28 00:24:25.479 [INFO][5281] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53" Namespace="kube-system" Pod="coredns-668d6bf9bc-q9kdk" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--q9kdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--q9kdk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6961f070-d2ac-4acd-b900-0b7c4fdc8c18", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 23, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-n-77eb5aaac5", ContainerID:"1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53", Pod:"coredns-668d6bf9bc-q9kdk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1ad9b2955c", MAC:"c6:f7:35:fa:5c:17", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:24:25.497640 containerd[2198]: 2026-01-28 00:24:25.494 [INFO][5281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53" Namespace="kube-system" Pod="coredns-668d6bf9bc-q9kdk" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--q9kdk-eth0" Jan 28 00:24:25.538907 containerd[2198]: time="2026-01-28T00:24:25.538427762Z" level=info msg="connecting to shim 1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53" address="unix:///run/containerd/s/d07480760f44685e4637bf38f0b86d22a205ad7021268d8146991fc3bbaf6956" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:24:25.547000 audit[5316]: NETFILTER_CFG table=nat:124 family=2 entries=15 op=nft_register_chain pid=5316 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:24:25.547000 audit[5316]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffd9b7b150 a2=0 a3=ffff882fdfa8 items=0 ppid=5064 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.547000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:24:25.554000 audit[5344]: NETFILTER_CFG table=mangle:125 family=2 entries=16 op=nft_register_chain pid=5344 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:24:25.554000 audit[5344]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffebbed5a0 a2=0 a3=ffffb0921fa8 items=0 ppid=5064 pid=5344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.554000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:24:25.563036 systemd[1]: Started cri-containerd-1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53.scope - libcontainer container 1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53. Jan 28 00:24:25.571000 audit: BPF prog-id=234 op=LOAD Jan 28 00:24:25.572000 audit: BPF prog-id=235 op=LOAD Jan 28 00:24:25.572000 audit[5336]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5324 pid=5336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131353765323065666466346334376237313134373863663539623738 Jan 28 00:24:25.572000 audit: BPF prog-id=235 op=UNLOAD Jan 28 00:24:25.572000 audit[5336]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5324 pid=5336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131353765323065666466346334376237313134373863663539623738 Jan 28 00:24:25.572000 audit[5315]: NETFILTER_CFG table=raw:126 family=2 entries=21 op=nft_register_chain pid=5315 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:24:25.572000 audit[5315]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffdd9dff00 a2=0 a3=ffff82fb2fa8 items=0 ppid=5064 pid=5315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.572000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:24:25.573000 audit: BPF prog-id=236 op=LOAD Jan 28 00:24:25.573000 audit[5336]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5324 pid=5336 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131353765323065666466346334376237313134373863663539623738 Jan 28 00:24:25.573000 audit: BPF prog-id=237 op=LOAD Jan 28 00:24:25.573000 audit[5336]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5324 pid=5336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131353765323065666466346334376237313134373863663539623738 Jan 28 00:24:25.573000 audit: BPF prog-id=237 op=UNLOAD Jan 28 00:24:25.573000 audit[5336]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5324 pid=5336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131353765323065666466346334376237313134373863663539623738 Jan 28 00:24:25.573000 audit: BPF prog-id=236 op=UNLOAD Jan 28 00:24:25.573000 audit[5336]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5324 pid=5336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131353765323065666466346334376237313134373863663539623738 Jan 28 00:24:25.573000 audit: BPF prog-id=238 op=LOAD Jan 28 00:24:25.573000 audit[5336]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5324 pid=5336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131353765323065666466346334376237313134373863663539623738 Jan 28 00:24:25.595182 containerd[2198]: time="2026-01-28T00:24:25.595159884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q9kdk,Uid:6961f070-d2ac-4acd-b900-0b7c4fdc8c18,Namespace:kube-system,Attempt:0,} returns sandbox id \"1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53\"" Jan 28 00:24:25.597475 containerd[2198]: 
time="2026-01-28T00:24:25.597440203Z" level=info msg="CreateContainer within sandbox \"1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 00:24:25.598000 audit[5342]: NETFILTER_CFG table=filter:127 family=2 entries=39 op=nft_register_chain pid=5342 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:24:25.598000 audit[5342]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18968 a0=3 a1=ffffc1870bd0 a2=0 a3=ffff86068fa8 items=0 ppid=5064 pid=5342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.598000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:24:25.618842 containerd[2198]: time="2026-01-28T00:24:25.618513379Z" level=info msg="Container 650fc92c7f587d3bf7fbe18b6cec30ccf0b1c981a2227b4128ade8e6f9c977cf: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:24:25.635056 containerd[2198]: time="2026-01-28T00:24:25.635026556Z" level=info msg="CreateContainer within sandbox \"1157e20efdf4c47b711478cf59b78a46ece7e336fb3a7f3e19aa95ce901d4e53\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"650fc92c7f587d3bf7fbe18b6cec30ccf0b1c981a2227b4128ade8e6f9c977cf\"" Jan 28 00:24:25.635523 containerd[2198]: time="2026-01-28T00:24:25.635496881Z" level=info msg="StartContainer for \"650fc92c7f587d3bf7fbe18b6cec30ccf0b1c981a2227b4128ade8e6f9c977cf\"" Jan 28 00:24:25.636331 containerd[2198]: time="2026-01-28T00:24:25.636307751Z" level=info msg="connecting to shim 650fc92c7f587d3bf7fbe18b6cec30ccf0b1c981a2227b4128ade8e6f9c977cf" address="unix:///run/containerd/s/d07480760f44685e4637bf38f0b86d22a205ad7021268d8146991fc3bbaf6956" protocol=ttrpc version=3 Jan 28 00:24:25.650958 systemd[1]: Started cri-containerd-650fc92c7f587d3bf7fbe18b6cec30ccf0b1c981a2227b4128ade8e6f9c977cf.scope - libcontainer container 650fc92c7f587d3bf7fbe18b6cec30ccf0b1c981a2227b4128ade8e6f9c977cf. 
Jan 28 00:24:25.659000 audit: BPF prog-id=239 op=LOAD Jan 28 00:24:25.660000 audit: BPF prog-id=240 op=LOAD Jan 28 00:24:25.660000 audit[5376]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5324 pid=5376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306663393263376635383764336266376662653138623663656333 Jan 28 00:24:25.660000 audit: BPF prog-id=240 op=UNLOAD Jan 28 00:24:25.660000 audit[5376]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5324 pid=5376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306663393263376635383764336266376662653138623663656333 Jan 28 00:24:25.660000 audit: BPF prog-id=241 op=LOAD Jan 28 00:24:25.660000 audit[5376]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5324 pid=5376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306663393263376635383764336266376662653138623663656333 Jan 28 00:24:25.660000 audit: BPF prog-id=242 op=LOAD Jan 28 00:24:25.660000 audit[5376]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5324 pid=5376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306663393263376635383764336266376662653138623663656333 Jan 28 00:24:25.660000 audit: BPF prog-id=242 op=UNLOAD Jan 28 00:24:25.660000 audit[5376]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5324 pid=5376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306663393263376635383764336266376662653138623663656333 Jan 28 00:24:25.660000 audit: BPF prog-id=241 op=UNLOAD Jan 28 00:24:25.660000 audit[5376]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5324 pid=5376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306663393263376635383764336266376662653138623663656333 Jan 28 00:24:25.660000 audit: BPF prog-id=243 op=LOAD Jan 28 00:24:25.660000 audit[5376]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5324 pid=5376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635306663393263376635383764336266376662653138623663656333 Jan 28 00:24:25.672336 containerd[2198]: time="2026-01-28T00:24:25.672310388Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:24:25.676891 containerd[2198]: time="2026-01-28T00:24:25.676852809Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 00:24:25.677047 containerd[2198]: time="2026-01-28T00:24:25.676912763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 00:24:25.677225 kubelet[3705]: E0128 00:24:25.677186 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:24:25.677578 kubelet[3705]: E0128 00:24:25.677423 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:24:25.681055 containerd[2198]: time="2026-01-28T00:24:25.680999964Z" level=info msg="StartContainer for \"650fc92c7f587d3bf7fbe18b6cec30ccf0b1c981a2227b4128ade8e6f9c977cf\" returns successfully" Jan 28 00:24:25.684493 kubelet[3705]: E0128 00:24:25.684387 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5b63b438c56a4f4382ff93bbd04b95ca,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sl6bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65d59647fb-65s4b_calico-system(e2c157ae-30f5-408a-abb4-61e3e5e3c10f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 00:24:25.686420 containerd[2198]: time="2026-01-28T00:24:25.686259982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 00:24:25.620000 audit[5371]: NETFILTER_CFG table=filter:128 family=2 entries=93 op=nft_register_chain pid=5371 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:24:25.620000 audit[5371]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=55512 a0=3 a1=ffffd9daff10 a2=0 a3=ffff82500fa8 items=0 ppid=5064 pid=5371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:25.620000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:24:25.955264 containerd[2198]: time="2026-01-28T00:24:25.955094847Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:24:25.983976 containerd[2198]: time="2026-01-28T00:24:25.983883299Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 00:24:25.983976 containerd[2198]: time="2026-01-28T00:24:25.983939941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 00:24:25.984247 kubelet[3705]: E0128 00:24:25.984205 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:24:25.984488 kubelet[3705]: E0128 00:24:25.984254 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:24:25.984517 kubelet[3705]: E0128 00:24:25.984344 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sl6bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65d59647fb-65s4b_calico-system(e2c157ae-30f5-408a-abb4-61e3e5e3c10f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 00:24:25.985961 kubelet[3705]: E0128 00:24:25.985918 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-65d59647fb-65s4b" podUID="e2c157ae-30f5-408a-abb4-61e3e5e3c10f" Jan 28 00:24:26.344953 systemd-networkd[1903]: vxlan.calico: Gained IPv6LL Jan 28 00:24:26.363348 kubelet[3705]: I0128 00:24:26.363314 3705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ace8222a-010a-4f04-ad26-8197c0467a4d" path="/var/lib/kubelet/pods/ace8222a-010a-4f04-ad26-8197c0467a4d/volumes" Jan 28 00:24:26.555797 kubelet[3705]: E0128 00:24:26.555714 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65d59647fb-65s4b" podUID="e2c157ae-30f5-408a-abb4-61e3e5e3c10f" Jan 28 00:24:26.579000 audit[5412]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=5412 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:26.579000 audit[5412]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe9d482f0 a2=0 a3=1 items=0 ppid=3806 pid=5412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:26.579000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:24:26.583387 kubelet[3705]: I0128 00:24:26.583090 3705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-q9kdk" podStartSLOduration=50.5830781 podStartE2EDuration="50.5830781s" podCreationTimestamp="2026-01-28 00:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:24:26.582586254 +0000 UTC m=+56.277183050" watchObservedRunningTime="2026-01-28 00:24:26.5830781 +0000 UTC m=+56.277674896" Jan 28 00:24:26.585000 audit[5412]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=5412 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:26.585000 audit[5412]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe9d482f0 a2=0 a3=1 items=0 ppid=3806 pid=5412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:26.585000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:24:26.596000 audit[5414]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=5414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:26.596000 audit[5414]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 
a1=fffffbbb9ab0 a2=0 a3=1 items=0 ppid=3806 pid=5414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:26.596000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:24:26.600000 audit[5414]: NETFILTER_CFG table=nat:132 family=2 entries=14 op=nft_register_rule pid=5414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:26.600000 audit[5414]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffffbbb9ab0 a2=0 a3=1 items=0 ppid=3806 pid=5414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:26.600000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:24:26.984991 systemd-networkd[1903]: calia00a67d3107: Gained IPv6LL Jan 28 00:24:26.985812 systemd-networkd[1903]: calic1ad9b2955c: Gained IPv6LL Jan 28 00:24:27.362086 containerd[2198]: time="2026-01-28T00:24:27.361928185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gj4l2,Uid:15a1f57d-6d54-4a40-8a77-34f9abd91cfa,Namespace:kube-system,Attempt:0,}" Jan 28 00:24:27.450626 systemd-networkd[1903]: calif1c6511cd58: Link UP Jan 28 00:24:27.451523 systemd-networkd[1903]: calif1c6511cd58: Gained carrier Jan 28 00:24:27.468289 containerd[2198]: 2026-01-28 00:24:27.397 [INFO][5417] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--gj4l2-eth0 coredns-668d6bf9bc- kube-system 15a1f57d-6d54-4a40-8a77-34f9abd91cfa 847 0 2026-01-28 00:23:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.1.0-n-77eb5aaac5 coredns-668d6bf9bc-gj4l2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif1c6511cd58 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-gj4l2" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--gj4l2-" Jan 28 00:24:27.468289 containerd[2198]: 2026-01-28 00:24:27.398 [INFO][5417] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-gj4l2" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--gj4l2-eth0" Jan 28 00:24:27.468289 containerd[2198]: 2026-01-28 00:24:27.415 [INFO][5429] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef" HandleID="k8s-pod-network.622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--gj4l2-eth0" Jan 28 00:24:27.468289 containerd[2198]: 2026-01-28 00:24:27.415 [INFO][5429] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef" 
HandleID="k8s-pod-network.622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--gj4l2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b8b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.1.0-n-77eb5aaac5", "pod":"coredns-668d6bf9bc-gj4l2", "timestamp":"2026-01-28 00:24:27.415178355 +0000 UTC"}, Hostname:"ci-4547.1.0-n-77eb5aaac5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:24:27.468289 containerd[2198]: 2026-01-28 00:24:27.415 [INFO][5429] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:24:27.468289 containerd[2198]: 2026-01-28 00:24:27.415 [INFO][5429] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:24:27.468289 containerd[2198]: 2026-01-28 00:24:27.415 [INFO][5429] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-n-77eb5aaac5' Jan 28 00:24:27.468289 containerd[2198]: 2026-01-28 00:24:27.421 [INFO][5429] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:27.468289 containerd[2198]: 2026-01-28 00:24:27.425 [INFO][5429] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:27.468289 containerd[2198]: 2026-01-28 00:24:27.428 [INFO][5429] ipam/ipam.go 511: Trying affinity for 192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:27.468289 containerd[2198]: 2026-01-28 00:24:27.430 [INFO][5429] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:27.468289 containerd[2198]: 2026-01-28 00:24:27.431 [INFO][5429] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:27.468289 containerd[2198]: 2026-01-28 00:24:27.432 [INFO][5429] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:27.468289 containerd[2198]: 2026-01-28 00:24:27.433 [INFO][5429] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef Jan 28 00:24:27.468289 containerd[2198]: 2026-01-28 00:24:27.439 [INFO][5429] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:27.468289 containerd[2198]: 2026-01-28 00:24:27.445 [INFO][5429] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.11.195/26] block=192.168.11.192/26 handle="k8s-pod-network.622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:27.468289 containerd[2198]: 2026-01-28 00:24:27.445 [INFO][5429] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.195/26] handle="k8s-pod-network.622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:27.468289 containerd[2198]: 2026-01-28 00:24:27.446 [INFO][5429] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
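For reference, the Calico IPAM entries above all draw from the same /26 affinity block on this node; a small sketch (Python standard library) checking that both pod addresses assigned so far in this log fall inside it:

import ipaddress

# Block and pod IPs as logged by Calico IPAM above:
# 192.168.11.194 for coredns-668d6bf9bc-q9kdk, 192.168.11.195 for coredns-668d6bf9bc-gj4l2.
block = ipaddress.ip_network("192.168.11.192/26")
pods = [ipaddress.ip_address("192.168.11.194"),
        ipaddress.ip_address("192.168.11.195")]

print(block.num_addresses)               # 64 addresses in a /26 block
print(all(ip in block for ip in pods))   # True: both pod IPs lie in the block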
Jan 28 00:24:27.468289 containerd[2198]: 2026-01-28 00:24:27.446 [INFO][5429] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.195/26] IPv6=[] ContainerID="622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef" HandleID="k8s-pod-network.622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--gj4l2-eth0" Jan 28 00:24:27.468655 containerd[2198]: 2026-01-28 00:24:27.447 [INFO][5417] cni-plugin/k8s.go 418: Populated endpoint ContainerID="622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-gj4l2" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--gj4l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--gj4l2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"15a1f57d-6d54-4a40-8a77-34f9abd91cfa", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 23, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-n-77eb5aaac5", ContainerID:"", Pod:"coredns-668d6bf9bc-gj4l2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif1c6511cd58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:24:27.468655 containerd[2198]: 2026-01-28 00:24:27.448 [INFO][5417] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.195/32] ContainerID="622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-gj4l2" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--gj4l2-eth0" Jan 28 00:24:27.468655 containerd[2198]: 2026-01-28 00:24:27.448 [INFO][5417] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1c6511cd58 ContainerID="622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-gj4l2" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--gj4l2-eth0" Jan 28 00:24:27.468655 containerd[2198]: 2026-01-28 00:24:27.452 [INFO][5417] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-gj4l2" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--gj4l2-eth0" Jan 28 00:24:27.468655 containerd[2198]: 2026-01-28 00:24:27.452 [INFO][5417] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-gj4l2" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--gj4l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--gj4l2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"15a1f57d-6d54-4a40-8a77-34f9abd91cfa", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 23, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-n-77eb5aaac5", ContainerID:"622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef", Pod:"coredns-668d6bf9bc-gj4l2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.11.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif1c6511cd58", MAC:"62:c9:37:9c:43:d8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:24:27.468655 containerd[2198]: 2026-01-28 00:24:27.464 [INFO][5417] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef" Namespace="kube-system" Pod="coredns-668d6bf9bc-gj4l2" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-coredns--668d6bf9bc--gj4l2-eth0" Jan 28 00:24:27.477000 audit[5446]: NETFILTER_CFG table=filter:133 family=2 entries=36 op=nft_register_chain pid=5446 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:24:27.477000 audit[5446]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19156 a0=3 a1=ffffd5f58610 a2=0 a3=ffff88416fa8 items=0 ppid=5064 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:27.477000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:24:27.513502 containerd[2198]: time="2026-01-28T00:24:27.513467747Z" level=info 
msg="connecting to shim 622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef" address="unix:///run/containerd/s/dc9034251983e4d669863f61250e91e9312bc6f569c72cca7e679c1f53df6dd6" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:24:27.529406 systemd[1]: Started cri-containerd-622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef.scope - libcontainer container 622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef. Jan 28 00:24:27.536000 audit: BPF prog-id=244 op=LOAD Jan 28 00:24:27.537000 audit: BPF prog-id=245 op=LOAD Jan 28 00:24:27.537000 audit[5466]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=5455 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:27.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632326131346538633932633431636362393034396666363033356535 Jan 28 00:24:27.537000 audit: BPF prog-id=245 op=UNLOAD Jan 28 00:24:27.537000 audit[5466]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5455 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:27.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632326131346538633932633431636362393034396666363033356535 Jan 28 00:24:27.537000 audit: BPF prog-id=246 op=LOAD Jan 28 00:24:27.537000 audit[5466]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=5455 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:27.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632326131346538633932633431636362393034396666363033356535 Jan 28 00:24:27.537000 audit: BPF prog-id=247 op=LOAD Jan 28 00:24:27.537000 audit[5466]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=5455 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:27.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632326131346538633932633431636362393034396666363033356535 Jan 28 00:24:27.537000 audit: BPF prog-id=247 op=UNLOAD Jan 28 00:24:27.537000 audit[5466]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5455 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:27.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632326131346538633932633431636362393034396666363033356535 Jan 28 00:24:27.537000 audit: BPF prog-id=246 op=UNLOAD Jan 28 00:24:27.537000 audit[5466]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5455 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:27.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632326131346538633932633431636362393034396666363033356535 Jan 28 00:24:27.538000 audit: BPF prog-id=248 op=LOAD Jan 28 00:24:27.538000 audit[5466]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=5455 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:27.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3632326131346538633932633431636362393034396666363033356535 Jan 28 00:24:27.568409 containerd[2198]: time="2026-01-28T00:24:27.568382363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gj4l2,Uid:15a1f57d-6d54-4a40-8a77-34f9abd91cfa,Namespace:kube-system,Attempt:0,} returns sandbox id \"622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef\"" Jan 28 00:24:27.571889 containerd[2198]: time="2026-01-28T00:24:27.571604188Z" level=info msg="CreateContainer within sandbox \"622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 00:24:27.597000 audit[5493]: NETFILTER_CFG table=filter:134 family=2 entries=17 op=nft_register_rule pid=5493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:27.597000 audit[5493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe66f4f90 a2=0 a3=1 items=0 ppid=3806 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:27.597000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:24:27.600000 audit[5493]: NETFILTER_CFG table=nat:135 family=2 entries=35 op=nft_register_chain pid=5493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:27.600000 audit[5493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffe66f4f90 a2=0 a3=1 items=0 ppid=3806 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:27.600000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:24:27.623722 containerd[2198]: time="2026-01-28T00:24:27.623660189Z" level=info msg="Container 7dda5b7fef63f8810b362720de25dd02e071f6b4c07e2efd92a203bfd27de4a2: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:24:27.639467 containerd[2198]: time="2026-01-28T00:24:27.639399873Z" level=info msg="CreateContainer within sandbox \"622a14e8c92c41ccb9049ff6035e520b2787642019ea2d18f1c8af5d529cf5ef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7dda5b7fef63f8810b362720de25dd02e071f6b4c07e2efd92a203bfd27de4a2\"" Jan 28 00:24:27.640274 containerd[2198]: time="2026-01-28T00:24:27.640233776Z" level=info msg="StartContainer for \"7dda5b7fef63f8810b362720de25dd02e071f6b4c07e2efd92a203bfd27de4a2\"" Jan 28 00:24:27.641837 containerd[2198]: time="2026-01-28T00:24:27.641558764Z" level=info msg="connecting to shim 7dda5b7fef63f8810b362720de25dd02e071f6b4c07e2efd92a203bfd27de4a2" address="unix:///run/containerd/s/dc9034251983e4d669863f61250e91e9312bc6f569c72cca7e679c1f53df6dd6" protocol=ttrpc version=3 Jan 28 00:24:27.658956 systemd[1]: Started cri-containerd-7dda5b7fef63f8810b362720de25dd02e071f6b4c07e2efd92a203bfd27de4a2.scope - libcontainer container 7dda5b7fef63f8810b362720de25dd02e071f6b4c07e2efd92a203bfd27de4a2. Jan 28 00:24:27.666000 audit: BPF prog-id=249 op=LOAD Jan 28 00:24:27.666000 audit: BPF prog-id=250 op=LOAD Jan 28 00:24:27.666000 audit[5494]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5455 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:27.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764646135623766656636336638383130623336323732306465323564 Jan 28 00:24:27.666000 audit: BPF prog-id=250 op=UNLOAD Jan 28 00:24:27.666000 audit[5494]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5455 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:27.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764646135623766656636336638383130623336323732306465323564 Jan 28 00:24:27.666000 audit: BPF prog-id=251 op=LOAD Jan 28 00:24:27.666000 audit[5494]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5455 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:27.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764646135623766656636336638383130623336323732306465323564 Jan 28 00:24:27.667000 audit: BPF prog-id=252 op=LOAD Jan 28 00:24:27.667000 
audit[5494]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5455 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:27.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764646135623766656636336638383130623336323732306465323564 Jan 28 00:24:27.667000 audit: BPF prog-id=252 op=UNLOAD Jan 28 00:24:27.667000 audit[5494]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5455 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:27.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764646135623766656636336638383130623336323732306465323564 Jan 28 00:24:27.667000 audit: BPF prog-id=251 op=UNLOAD Jan 28 00:24:27.667000 audit[5494]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5455 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:27.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764646135623766656636336638383130623336323732306465323564 Jan 28 00:24:27.667000 audit: BPF prog-id=253 op=LOAD Jan 28 00:24:27.667000 audit[5494]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5455 pid=5494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:27.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764646135623766656636336638383130623336323732306465323564 Jan 28 00:24:27.686464 containerd[2198]: time="2026-01-28T00:24:27.686444827Z" level=info msg="StartContainer for \"7dda5b7fef63f8810b362720de25dd02e071f6b4c07e2efd92a203bfd27de4a2\" returns successfully" Jan 28 00:24:28.573293 kubelet[3705]: I0128 00:24:28.573234 3705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gj4l2" podStartSLOduration=52.573219537 podStartE2EDuration="52.573219537s" podCreationTimestamp="2026-01-28 00:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:24:28.572012512 +0000 UTC m=+58.266609308" watchObservedRunningTime="2026-01-28 00:24:28.573219537 +0000 UTC m=+58.267816333" Jan 28 00:24:28.639000 audit[5527]: NETFILTER_CFG table=filter:136 family=2 entries=14 op=nft_register_rule pid=5527 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:28.643710 kernel: kauditd_printk_skb: 337 callbacks suppressed Jan 28 00:24:28.643794 kernel: audit: type=1325 audit(1769559868.639:702): table=filter:136 family=2 entries=14 op=nft_register_rule pid=5527 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:28.639000 audit[5527]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd0c5b0c0 a2=0 a3=1 items=0 ppid=3806 pid=5527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:28.673915 kernel: audit: type=1300 audit(1769559868.639:702): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd0c5b0c0 a2=0 a3=1 items=0 ppid=3806 pid=5527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:28.639000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:24:28.684007 kernel: audit: type=1327 audit(1769559868.639:702): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:24:28.685000 audit[5527]: NETFILTER_CFG table=nat:137 family=2 entries=44 op=nft_register_rule pid=5527 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:28.685000 audit[5527]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffd0c5b0c0 a2=0 a3=1 items=0 ppid=3806 pid=5527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:28.715774 kernel: audit: type=1325 audit(1769559868.685:703): table=nat:137 family=2 entries=44 op=nft_register_rule pid=5527 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:28.715840 kernel: audit: type=1300 audit(1769559868.685:703): arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffd0c5b0c0 a2=0 a3=1 items=0 ppid=3806 pid=5527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:28.685000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:24:28.726576 kernel: audit: type=1327 audit(1769559868.685:703): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:24:29.417062 systemd-networkd[1903]: calif1c6511cd58: Gained IPv6LL Jan 28 00:24:29.704000 audit[5529]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=5529 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:29.704000 audit[5529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc00bc1e0 a2=0 a3=1 items=0 ppid=3806 pid=5529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:29.733293 kernel: audit: type=1325 audit(1769559869.704:704): 
table=filter:138 family=2 entries=14 op=nft_register_rule pid=5529 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:29.733357 kernel: audit: type=1300 audit(1769559869.704:704): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc00bc1e0 a2=0 a3=1 items=0 ppid=3806 pid=5529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:29.704000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:24:29.743416 kernel: audit: type=1327 audit(1769559869.704:704): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:24:29.745371 kernel: audit: type=1325 audit(1769559869.744:705): table=nat:139 family=2 entries=56 op=nft_register_chain pid=5529 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:29.744000 audit[5529]: NETFILTER_CFG table=nat:139 family=2 entries=56 op=nft_register_chain pid=5529 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:29.744000 audit[5529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffc00bc1e0 a2=0 a3=1 items=0 ppid=3806 pid=5529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:29.744000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:24:33.361528 containerd[2198]: time="2026-01-28T00:24:33.361386117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gxssn,Uid:37d401e3-39ef-4596-8144-de1aba842d50,Namespace:calico-system,Attempt:0,}" Jan 28 00:24:33.461206 systemd-networkd[1903]: calic711241ba88: Link UP Jan 28 00:24:33.463588 systemd-networkd[1903]: calic711241ba88: Gained carrier Jan 28 00:24:33.478190 containerd[2198]: 2026-01-28 00:24:33.404 [INFO][5542] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--n--77eb5aaac5-k8s-goldmane--666569f655--gxssn-eth0 goldmane-666569f655- calico-system 37d401e3-39ef-4596-8144-de1aba842d50 855 0 2026-01-28 00:23:48 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547.1.0-n-77eb5aaac5 goldmane-666569f655-gxssn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic711241ba88 [] [] }} ContainerID="5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7" Namespace="calico-system" Pod="goldmane-666569f655-gxssn" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-goldmane--666569f655--gxssn-" Jan 28 00:24:33.478190 containerd[2198]: 2026-01-28 00:24:33.404 [INFO][5542] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7" Namespace="calico-system" Pod="goldmane-666569f655-gxssn" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-goldmane--666569f655--gxssn-eth0" Jan 28 00:24:33.478190 containerd[2198]: 2026-01-28 00:24:33.427 [INFO][5554] ipam/ipam_plugin.go 227: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7" HandleID="k8s-pod-network.5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-goldmane--666569f655--gxssn-eth0" Jan 28 00:24:33.478190 containerd[2198]: 2026-01-28 00:24:33.427 [INFO][5554] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7" HandleID="k8s-pod-network.5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-goldmane--666569f655--gxssn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400048e200), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.1.0-n-77eb5aaac5", "pod":"goldmane-666569f655-gxssn", "timestamp":"2026-01-28 00:24:33.427285103 +0000 UTC"}, Hostname:"ci-4547.1.0-n-77eb5aaac5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:24:33.478190 containerd[2198]: 2026-01-28 00:24:33.427 [INFO][5554] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:24:33.478190 containerd[2198]: 2026-01-28 00:24:33.427 [INFO][5554] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:24:33.478190 containerd[2198]: 2026-01-28 00:24:33.427 [INFO][5554] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-n-77eb5aaac5' Jan 28 00:24:33.478190 containerd[2198]: 2026-01-28 00:24:33.433 [INFO][5554] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:33.478190 containerd[2198]: 2026-01-28 00:24:33.437 [INFO][5554] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:33.478190 containerd[2198]: 2026-01-28 00:24:33.440 [INFO][5554] ipam/ipam.go 511: Trying affinity for 192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:33.478190 containerd[2198]: 2026-01-28 00:24:33.442 [INFO][5554] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:33.478190 containerd[2198]: 2026-01-28 00:24:33.443 [INFO][5554] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:33.478190 containerd[2198]: 2026-01-28 00:24:33.443 [INFO][5554] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:33.478190 containerd[2198]: 2026-01-28 00:24:33.445 [INFO][5554] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7 Jan 28 00:24:33.478190 containerd[2198]: 2026-01-28 00:24:33.450 [INFO][5554] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:33.478190 containerd[2198]: 2026-01-28 00:24:33.457 [INFO][5554] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.11.196/26] block=192.168.11.192/26 handle="k8s-pod-network.5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7" 
host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:33.478190 containerd[2198]: 2026-01-28 00:24:33.457 [INFO][5554] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.196/26] handle="k8s-pod-network.5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:33.478190 containerd[2198]: 2026-01-28 00:24:33.457 [INFO][5554] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:24:33.478190 containerd[2198]: 2026-01-28 00:24:33.457 [INFO][5554] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.196/26] IPv6=[] ContainerID="5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7" HandleID="k8s-pod-network.5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-goldmane--666569f655--gxssn-eth0" Jan 28 00:24:33.478546 containerd[2198]: 2026-01-28 00:24:33.459 [INFO][5542] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7" Namespace="calico-system" Pod="goldmane-666569f655-gxssn" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-goldmane--666569f655--gxssn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--n--77eb5aaac5-k8s-goldmane--666569f655--gxssn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"37d401e3-39ef-4596-8144-de1aba842d50", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 23, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-n-77eb5aaac5", ContainerID:"", Pod:"goldmane-666569f655-gxssn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.11.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic711241ba88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:24:33.478546 containerd[2198]: 2026-01-28 00:24:33.459 [INFO][5542] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.196/32] ContainerID="5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7" Namespace="calico-system" Pod="goldmane-666569f655-gxssn" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-goldmane--666569f655--gxssn-eth0" Jan 28 00:24:33.478546 containerd[2198]: 2026-01-28 00:24:33.459 [INFO][5542] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic711241ba88 ContainerID="5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7" Namespace="calico-system" Pod="goldmane-666569f655-gxssn" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-goldmane--666569f655--gxssn-eth0" Jan 28 00:24:33.478546 containerd[2198]: 2026-01-28 00:24:33.461 [INFO][5542] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7" Namespace="calico-system" Pod="goldmane-666569f655-gxssn" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-goldmane--666569f655--gxssn-eth0" Jan 28 00:24:33.478546 containerd[2198]: 2026-01-28 00:24:33.463 [INFO][5542] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7" Namespace="calico-system" Pod="goldmane-666569f655-gxssn" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-goldmane--666569f655--gxssn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--n--77eb5aaac5-k8s-goldmane--666569f655--gxssn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"37d401e3-39ef-4596-8144-de1aba842d50", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 23, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-n-77eb5aaac5", ContainerID:"5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7", Pod:"goldmane-666569f655-gxssn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.11.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic711241ba88", MAC:"06:fc:0e:bd:e2:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:24:33.478546 containerd[2198]: 2026-01-28 00:24:33.475 [INFO][5542] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7" Namespace="calico-system" Pod="goldmane-666569f655-gxssn" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-goldmane--666569f655--gxssn-eth0" Jan 28 00:24:33.489000 audit[5571]: NETFILTER_CFG table=filter:140 family=2 entries=58 op=nft_register_chain pid=5571 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:24:33.489000 audit[5571]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30408 a0=3 a1=ffffd765fdb0 a2=0 a3=ffff80215fa8 items=0 ppid=5064 pid=5571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:33.489000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:24:33.517356 containerd[2198]: time="2026-01-28T00:24:33.517295490Z" level=info msg="connecting to shim 5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7" address="unix:///run/containerd/s/9d9c54da94e63c1249a2c0bbacbdc1121eeb7ee1b6cb5b3084a8e28cfbd70764" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:24:33.537971 
systemd[1]: Started cri-containerd-5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7.scope - libcontainer container 5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7. Jan 28 00:24:33.545000 audit: BPF prog-id=254 op=LOAD Jan 28 00:24:33.546000 audit: BPF prog-id=255 op=LOAD Jan 28 00:24:33.546000 audit[5592]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=5581 pid=5592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:33.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643565323665356436313832363666396463393663306466383862 Jan 28 00:24:33.546000 audit: BPF prog-id=255 op=UNLOAD Jan 28 00:24:33.546000 audit[5592]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5581 pid=5592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:33.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643565323665356436313832363666396463393663306466383862 Jan 28 00:24:33.546000 audit: BPF prog-id=256 op=LOAD Jan 28 00:24:33.546000 audit[5592]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=5581 pid=5592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:33.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643565323665356436313832363666396463393663306466383862 Jan 28 00:24:33.546000 audit: BPF prog-id=257 op=LOAD Jan 28 00:24:33.546000 audit[5592]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=5581 pid=5592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:33.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643565323665356436313832363666396463393663306466383862 Jan 28 00:24:33.546000 audit: BPF prog-id=257 op=UNLOAD Jan 28 00:24:33.546000 audit[5592]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5581 pid=5592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:33.546000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643565323665356436313832363666396463393663306466383862 Jan 28 00:24:33.546000 audit: BPF prog-id=256 op=UNLOAD Jan 28 00:24:33.546000 audit[5592]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5581 pid=5592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:33.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643565323665356436313832363666396463393663306466383862 Jan 28 00:24:33.546000 audit: BPF prog-id=258 op=LOAD Jan 28 00:24:33.546000 audit[5592]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=5581 pid=5592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:33.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561643565323665356436313832363666396463393663306466383862 Jan 28 00:24:33.568702 containerd[2198]: time="2026-01-28T00:24:33.568648011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gxssn,Uid:37d401e3-39ef-4596-8144-de1aba842d50,Namespace:calico-system,Attempt:0,} returns sandbox id \"5ad5e26e5d618266f9dc96c0df88b4a19e8112ee7c7b979885df9e0681d749f7\"" Jan 28 00:24:33.571590 containerd[2198]: time="2026-01-28T00:24:33.571564035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 00:24:33.879377 containerd[2198]: time="2026-01-28T00:24:33.879327981Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:24:33.883316 containerd[2198]: time="2026-01-28T00:24:33.883284714Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 00:24:33.883390 containerd[2198]: time="2026-01-28T00:24:33.883358060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 00:24:33.883744 kubelet[3705]: E0128 00:24:33.883528 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:24:33.883744 kubelet[3705]: E0128 00:24:33.883578 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:24:33.884902 kubelet[3705]: E0128 
00:24:33.884724 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhxzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gxssn_calico-system(37d401e3-39ef-4596-8144-de1aba842d50): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 00:24:33.885920 kubelet[3705]: E0128 00:24:33.885873 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxssn" podUID="37d401e3-39ef-4596-8144-de1aba842d50" Jan 28 00:24:34.361534 containerd[2198]: 
time="2026-01-28T00:24:34.361437703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db66f5c9f-n9l6g,Uid:d62fc2fd-8ccc-48de-b10b-98a0aa5672ea,Namespace:calico-apiserver,Attempt:0,}" Jan 28 00:24:34.456535 systemd-networkd[1903]: cali87ac4401fdb: Link UP Jan 28 00:24:34.457686 systemd-networkd[1903]: cali87ac4401fdb: Gained carrier Jan 28 00:24:34.473720 containerd[2198]: 2026-01-28 00:24:34.400 [INFO][5618] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--n9l6g-eth0 calico-apiserver-6db66f5c9f- calico-apiserver d62fc2fd-8ccc-48de-b10b-98a0aa5672ea 851 0 2026-01-28 00:23:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6db66f5c9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.1.0-n-77eb5aaac5 calico-apiserver-6db66f5c9f-n9l6g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali87ac4401fdb [] [] }} ContainerID="3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4" Namespace="calico-apiserver" Pod="calico-apiserver-6db66f5c9f-n9l6g" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--n9l6g-" Jan 28 00:24:34.473720 containerd[2198]: 2026-01-28 00:24:34.400 [INFO][5618] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4" Namespace="calico-apiserver" Pod="calico-apiserver-6db66f5c9f-n9l6g" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--n9l6g-eth0" Jan 28 00:24:34.473720 containerd[2198]: 2026-01-28 00:24:34.419 [INFO][5631] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4" HandleID="k8s-pod-network.3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--n9l6g-eth0" Jan 28 00:24:34.473720 containerd[2198]: 2026-01-28 00:24:34.420 [INFO][5631] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4" HandleID="k8s-pod-network.3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--n9l6g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b2d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.1.0-n-77eb5aaac5", "pod":"calico-apiserver-6db66f5c9f-n9l6g", "timestamp":"2026-01-28 00:24:34.419561219 +0000 UTC"}, Hostname:"ci-4547.1.0-n-77eb5aaac5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:24:34.473720 containerd[2198]: 2026-01-28 00:24:34.420 [INFO][5631] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:24:34.473720 containerd[2198]: 2026-01-28 00:24:34.420 [INFO][5631] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:24:34.473720 containerd[2198]: 2026-01-28 00:24:34.420 [INFO][5631] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-n-77eb5aaac5' Jan 28 00:24:34.473720 containerd[2198]: 2026-01-28 00:24:34.425 [INFO][5631] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:34.473720 containerd[2198]: 2026-01-28 00:24:34.429 [INFO][5631] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:34.473720 containerd[2198]: 2026-01-28 00:24:34.432 [INFO][5631] ipam/ipam.go 511: Trying affinity for 192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:34.473720 containerd[2198]: 2026-01-28 00:24:34.433 [INFO][5631] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:34.473720 containerd[2198]: 2026-01-28 00:24:34.435 [INFO][5631] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:34.473720 containerd[2198]: 2026-01-28 00:24:34.435 [INFO][5631] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:34.473720 containerd[2198]: 2026-01-28 00:24:34.436 [INFO][5631] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4 Jan 28 00:24:34.473720 containerd[2198]: 2026-01-28 00:24:34.442 [INFO][5631] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:34.473720 containerd[2198]: 2026-01-28 00:24:34.451 [INFO][5631] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.11.197/26] block=192.168.11.192/26 handle="k8s-pod-network.3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:34.473720 containerd[2198]: 2026-01-28 00:24:34.451 [INFO][5631] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.197/26] handle="k8s-pod-network.3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:34.473720 containerd[2198]: 2026-01-28 00:24:34.451 [INFO][5631] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
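The ipam/ipam.go records above show Calico handing out pod addresses from the node-affine block 192.168.11.192/26: 192.168.11.196 went to goldmane-666569f655-gxssn earlier and 192.168.11.197 is claimed for calico-apiserver-6db66f5c9f-n9l6g here. As a purely illustrative sketch (the CIDR and addresses are taken from the log; the script itself is not part of Calico or of this host), Python's standard ipaddress module can confirm what such a /26 block provides:

    import ipaddress

    # Block and addresses as reported by ipam/ipam.go in the log above.
    block = ipaddress.ip_network("192.168.11.192/26")
    assigned = [
        ipaddress.ip_address("192.168.11.196"),  # goldmane-666569f655-gxssn
        ipaddress.ip_address("192.168.11.197"),  # calico-apiserver-6db66f5c9f-n9l6g
    ]

    print(f"block {block} spans {block.num_addresses} addresses "
          f"({block.network_address} - {block.broadcast_address})")
    for ip in assigned:
        print(f"{ip} in block: {ip in block}")

A /26 holds 64 addresses, which is why both pods land inside the same block and the IPAM plugin only needs the one host-wide lock acquisition per request seen above.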
Jan 28 00:24:34.473720 containerd[2198]: 2026-01-28 00:24:34.451 [INFO][5631] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.197/26] IPv6=[] ContainerID="3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4" HandleID="k8s-pod-network.3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--n9l6g-eth0" Jan 28 00:24:34.474108 containerd[2198]: 2026-01-28 00:24:34.453 [INFO][5618] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4" Namespace="calico-apiserver" Pod="calico-apiserver-6db66f5c9f-n9l6g" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--n9l6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--n9l6g-eth0", GenerateName:"calico-apiserver-6db66f5c9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"d62fc2fd-8ccc-48de-b10b-98a0aa5672ea", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 23, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6db66f5c9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-n-77eb5aaac5", ContainerID:"", Pod:"calico-apiserver-6db66f5c9f-n9l6g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87ac4401fdb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:24:34.474108 containerd[2198]: 2026-01-28 00:24:34.453 [INFO][5618] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.197/32] ContainerID="3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4" Namespace="calico-apiserver" Pod="calico-apiserver-6db66f5c9f-n9l6g" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--n9l6g-eth0" Jan 28 00:24:34.474108 containerd[2198]: 2026-01-28 00:24:34.453 [INFO][5618] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87ac4401fdb ContainerID="3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4" Namespace="calico-apiserver" Pod="calico-apiserver-6db66f5c9f-n9l6g" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--n9l6g-eth0" Jan 28 00:24:34.474108 containerd[2198]: 2026-01-28 00:24:34.458 [INFO][5618] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4" Namespace="calico-apiserver" Pod="calico-apiserver-6db66f5c9f-n9l6g" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--n9l6g-eth0" Jan 28 00:24:34.474108 containerd[2198]: 2026-01-28 00:24:34.458 
[INFO][5618] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4" Namespace="calico-apiserver" Pod="calico-apiserver-6db66f5c9f-n9l6g" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--n9l6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--n9l6g-eth0", GenerateName:"calico-apiserver-6db66f5c9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"d62fc2fd-8ccc-48de-b10b-98a0aa5672ea", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 23, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6db66f5c9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-n-77eb5aaac5", ContainerID:"3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4", Pod:"calico-apiserver-6db66f5c9f-n9l6g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87ac4401fdb", MAC:"c6:05:0c:52:f6:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:24:34.474108 containerd[2198]: 2026-01-28 00:24:34.470 [INFO][5618] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4" Namespace="calico-apiserver" Pod="calico-apiserver-6db66f5c9f-n9l6g" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--n9l6g-eth0" Jan 28 00:24:34.486000 audit[5644]: NETFILTER_CFG table=filter:141 family=2 entries=58 op=nft_register_chain pid=5644 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:24:34.491028 kernel: kauditd_printk_skb: 27 callbacks suppressed Jan 28 00:24:34.491095 kernel: audit: type=1325 audit(1769559874.486:715): table=filter:141 family=2 entries=58 op=nft_register_chain pid=5644 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:24:34.486000 audit[5644]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30568 a0=3 a1=ffffe1778140 a2=0 a3=ffffa3605fa8 items=0 ppid=5064 pid=5644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:34.520999 kernel: audit: type=1300 audit(1769559874.486:715): arch=c00000b7 syscall=211 success=yes exit=30568 a0=3 a1=ffffe1778140 a2=0 a3=ffffa3605fa8 items=0 ppid=5064 pid=5644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
00:24:34.486000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:24:34.532991 kernel: audit: type=1327 audit(1769559874.486:715): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:24:34.542744 containerd[2198]: time="2026-01-28T00:24:34.542715272Z" level=info msg="connecting to shim 3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4" address="unix:///run/containerd/s/67f61f2884951a29d05d384d8171c260a355e70eab83b2ad9d0c7c1d3a23a6e5" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:24:34.563981 systemd[1]: Started cri-containerd-3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4.scope - libcontainer container 3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4. Jan 28 00:24:34.573860 kubelet[3705]: E0128 00:24:34.573772 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxssn" podUID="37d401e3-39ef-4596-8144-de1aba842d50" Jan 28 00:24:34.575000 audit: BPF prog-id=259 op=LOAD Jan 28 00:24:34.580000 audit: BPF prog-id=260 op=LOAD Jan 28 00:24:34.587991 kernel: audit: type=1334 audit(1769559874.575:716): prog-id=259 op=LOAD Jan 28 00:24:34.588067 kernel: audit: type=1334 audit(1769559874.580:717): prog-id=260 op=LOAD Jan 28 00:24:34.588085 kernel: audit: type=1300 audit(1769559874.580:717): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5654 pid=5665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:34.580000 audit[5665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5654 pid=5665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:34.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337373866646565336530376561613435343633313261613763616630 Jan 28 00:24:34.626221 kernel: audit: type=1327 audit(1769559874.580:717): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337373866646565336530376561613435343633313261613763616630 Jan 28 00:24:34.626255 kernel: audit: type=1334 audit(1769559874.580:718): prog-id=260 op=UNLOAD Jan 28 00:24:34.580000 audit: BPF prog-id=260 op=UNLOAD Jan 28 00:24:34.580000 audit[5665]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5654 pid=5665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:34.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337373866646565336530376561613435343633313261613763616630 Jan 28 00:24:34.671352 kernel: audit: type=1300 audit(1769559874.580:718): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5654 pid=5665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:34.671445 kernel: audit: type=1327 audit(1769559874.580:718): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337373866646565336530376561613435343633313261613763616630 Jan 28 00:24:34.580000 audit: BPF prog-id=261 op=LOAD Jan 28 00:24:34.580000 audit[5665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5654 pid=5665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:34.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337373866646565336530376561613435343633313261613763616630 Jan 28 00:24:34.604000 audit: BPF prog-id=262 op=LOAD Jan 28 00:24:34.604000 audit[5665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5654 pid=5665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:34.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337373866646565336530376561613435343633313261613763616630 Jan 28 00:24:34.604000 audit: BPF prog-id=262 op=UNLOAD Jan 28 00:24:34.604000 audit[5665]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5654 pid=5665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:34.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337373866646565336530376561613435343633313261613763616630 Jan 28 00:24:34.604000 audit: BPF prog-id=261 op=UNLOAD Jan 28 00:24:34.604000 audit[5665]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5654 pid=5665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:34.604000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337373866646565336530376561613435343633313261613763616630 Jan 28 00:24:34.604000 audit: BPF prog-id=263 op=LOAD Jan 28 00:24:34.604000 audit[5665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5654 pid=5665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:34.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337373866646565336530376561613435343633313261613763616630 Jan 28 00:24:34.672000 audit[5686]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=5686 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:34.672000 audit[5686]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcfda91a0 a2=0 a3=1 items=0 ppid=3806 pid=5686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:34.672000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:24:34.677000 audit[5686]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5686 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:34.677000 audit[5686]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffcfda91a0 a2=0 a3=1 items=0 ppid=3806 pid=5686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:34.677000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:24:34.687782 containerd[2198]: time="2026-01-28T00:24:34.687747857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db66f5c9f-n9l6g,Uid:d62fc2fd-8ccc-48de-b10b-98a0aa5672ea,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3778fdee3e07eaa4546312aa7caf0144468c9c93a95f496c3713f52a21b228f4\"" Jan 28 00:24:34.689415 containerd[2198]: time="2026-01-28T00:24:34.689394070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:24:34.944753 containerd[2198]: time="2026-01-28T00:24:34.944554221Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:24:34.948697 containerd[2198]: time="2026-01-28T00:24:34.948607061Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:24:34.948697 containerd[2198]: time="2026-01-28T00:24:34.948662670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 00:24:34.948809 kubelet[3705]: E0128 
00:24:34.948785 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:24:34.949476 kubelet[3705]: E0128 00:24:34.948839 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:24:34.949476 kubelet[3705]: E0128 00:24:34.948931 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5v98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6db66f5c9f-n9l6g_calico-apiserver(d62fc2fd-8ccc-48de-b10b-98a0aa5672ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:24:34.950188 kubelet[3705]: E0128 00:24:34.950156 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6db66f5c9f-n9l6g" podUID="d62fc2fd-8ccc-48de-b10b-98a0aa5672ea" Jan 28 00:24:35.048952 systemd-networkd[1903]: calic711241ba88: Gained IPv6LL Jan 28 00:24:35.361864 containerd[2198]: time="2026-01-28T00:24:35.361809907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db66f5c9f-tsvdr,Uid:d774fe09-bd7c-498b-91a3-e6c2f720c9c3,Namespace:calico-apiserver,Attempt:0,}" Jan 28 00:24:35.362173 containerd[2198]: time="2026-01-28T00:24:35.362150268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2sfv,Uid:f6d08a70-95be-4168-8a2f-3e965a6278e2,Namespace:calico-system,Attempt:0,}" Jan 28 00:24:35.362295 containerd[2198]: time="2026-01-28T00:24:35.362232982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-758f45684b-zdqfk,Uid:11c0eb0b-ad29-4c1b-b01f-f65a107c6011,Namespace:calico-system,Attempt:0,}" Jan 28 00:24:35.512253 systemd-networkd[1903]: calidcce3df9668: Link UP Jan 28 00:24:35.514161 systemd-networkd[1903]: calidcce3df9668: Gained carrier Jan 28 00:24:35.530373 containerd[2198]: 2026-01-28 00:24:35.433 [INFO][5694] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--tsvdr-eth0 calico-apiserver-6db66f5c9f- calico-apiserver d774fe09-bd7c-498b-91a3-e6c2f720c9c3 856 0 2026-01-28 00:23:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6db66f5c9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.1.0-n-77eb5aaac5 calico-apiserver-6db66f5c9f-tsvdr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidcce3df9668 [] [] }} ContainerID="aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4" Namespace="calico-apiserver" Pod="calico-apiserver-6db66f5c9f-tsvdr" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--tsvdr-" Jan 28 00:24:35.530373 containerd[2198]: 2026-01-28 00:24:35.434 [INFO][5694] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4" Namespace="calico-apiserver" Pod="calico-apiserver-6db66f5c9f-tsvdr" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--tsvdr-eth0" Jan 28 00:24:35.530373 containerd[2198]: 2026-01-28 00:24:35.461 [INFO][5731] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4" HandleID="k8s-pod-network.aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--tsvdr-eth0" Jan 28 00:24:35.530373 containerd[2198]: 2026-01-28 00:24:35.461 [INFO][5731] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4" HandleID="k8s-pod-network.aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--tsvdr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.1.0-n-77eb5aaac5", "pod":"calico-apiserver-6db66f5c9f-tsvdr", "timestamp":"2026-01-28 00:24:35.461043244 +0000 
UTC"}, Hostname:"ci-4547.1.0-n-77eb5aaac5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:24:35.530373 containerd[2198]: 2026-01-28 00:24:35.461 [INFO][5731] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:24:35.530373 containerd[2198]: 2026-01-28 00:24:35.461 [INFO][5731] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:24:35.530373 containerd[2198]: 2026-01-28 00:24:35.461 [INFO][5731] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-n-77eb5aaac5' Jan 28 00:24:35.530373 containerd[2198]: 2026-01-28 00:24:35.466 [INFO][5731] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.530373 containerd[2198]: 2026-01-28 00:24:35.472 [INFO][5731] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.530373 containerd[2198]: 2026-01-28 00:24:35.477 [INFO][5731] ipam/ipam.go 511: Trying affinity for 192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.530373 containerd[2198]: 2026-01-28 00:24:35.479 [INFO][5731] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.530373 containerd[2198]: 2026-01-28 00:24:35.482 [INFO][5731] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.530373 containerd[2198]: 2026-01-28 00:24:35.482 [INFO][5731] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.530373 containerd[2198]: 2026-01-28 00:24:35.484 [INFO][5731] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4 Jan 28 00:24:35.530373 containerd[2198]: 2026-01-28 00:24:35.489 [INFO][5731] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.530373 containerd[2198]: 2026-01-28 00:24:35.504 [INFO][5731] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.11.198/26] block=192.168.11.192/26 handle="k8s-pod-network.aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.530373 containerd[2198]: 2026-01-28 00:24:35.504 [INFO][5731] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.198/26] handle="k8s-pod-network.aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.530373 containerd[2198]: 2026-01-28 00:24:35.504 [INFO][5731] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 00:24:35.530373 containerd[2198]: 2026-01-28 00:24:35.504 [INFO][5731] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.198/26] IPv6=[] ContainerID="aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4" HandleID="k8s-pod-network.aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--tsvdr-eth0" Jan 28 00:24:35.530785 containerd[2198]: 2026-01-28 00:24:35.507 [INFO][5694] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4" Namespace="calico-apiserver" Pod="calico-apiserver-6db66f5c9f-tsvdr" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--tsvdr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--tsvdr-eth0", GenerateName:"calico-apiserver-6db66f5c9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"d774fe09-bd7c-498b-91a3-e6c2f720c9c3", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 23, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6db66f5c9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-n-77eb5aaac5", ContainerID:"", Pod:"calico-apiserver-6db66f5c9f-tsvdr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidcce3df9668", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:24:35.530785 containerd[2198]: 2026-01-28 00:24:35.508 [INFO][5694] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.198/32] ContainerID="aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4" Namespace="calico-apiserver" Pod="calico-apiserver-6db66f5c9f-tsvdr" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--tsvdr-eth0" Jan 28 00:24:35.530785 containerd[2198]: 2026-01-28 00:24:35.508 [INFO][5694] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidcce3df9668 ContainerID="aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4" Namespace="calico-apiserver" Pod="calico-apiserver-6db66f5c9f-tsvdr" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--tsvdr-eth0" Jan 28 00:24:35.530785 containerd[2198]: 2026-01-28 00:24:35.512 [INFO][5694] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4" Namespace="calico-apiserver" Pod="calico-apiserver-6db66f5c9f-tsvdr" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--tsvdr-eth0" Jan 28 00:24:35.530785 containerd[2198]: 2026-01-28 00:24:35.514 
[INFO][5694] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4" Namespace="calico-apiserver" Pod="calico-apiserver-6db66f5c9f-tsvdr" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--tsvdr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--tsvdr-eth0", GenerateName:"calico-apiserver-6db66f5c9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"d774fe09-bd7c-498b-91a3-e6c2f720c9c3", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 23, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6db66f5c9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-n-77eb5aaac5", ContainerID:"aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4", Pod:"calico-apiserver-6db66f5c9f-tsvdr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidcce3df9668", MAC:"6a:77:05:d6:75:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:24:35.530785 containerd[2198]: 2026-01-28 00:24:35.527 [INFO][5694] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4" Namespace="calico-apiserver" Pod="calico-apiserver-6db66f5c9f-tsvdr" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--6db66f5c9f--tsvdr-eth0" Jan 28 00:24:35.538000 audit[5763]: NETFILTER_CFG table=filter:144 family=2 entries=49 op=nft_register_chain pid=5763 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:24:35.538000 audit[5763]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25436 a0=3 a1=ffffcdb0b790 a2=0 a3=ffff9cd2bfa8 items=0 ppid=5064 pid=5763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.538000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:24:35.560952 systemd-networkd[1903]: cali87ac4401fdb: Gained IPv6LL Jan 28 00:24:35.579909 kubelet[3705]: E0128 00:24:35.579848 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxssn" podUID="37d401e3-39ef-4596-8144-de1aba842d50" Jan 28 00:24:35.580569 kubelet[3705]: E0128 00:24:35.580488 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-n9l6g" podUID="d62fc2fd-8ccc-48de-b10b-98a0aa5672ea" Jan 28 00:24:35.590532 containerd[2198]: time="2026-01-28T00:24:35.590339139Z" level=info msg="connecting to shim aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4" address="unix:///run/containerd/s/036ec0372dd5fa3671ebf19599c70813a0ea66672f24cff33a04b992d4e68527" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:24:35.621975 systemd[1]: Started cri-containerd-aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4.scope - libcontainer container aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4. Jan 28 00:24:35.638990 systemd-networkd[1903]: calic7ea02cee0d: Link UP Jan 28 00:24:35.639000 audit[5810]: NETFILTER_CFG table=filter:145 family=2 entries=14 op=nft_register_rule pid=5810 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:35.639000 audit[5810]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffee50b030 a2=0 a3=1 items=0 ppid=3806 pid=5810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.639000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:24:35.640101 systemd-networkd[1903]: calic7ea02cee0d: Gained carrier Jan 28 00:24:35.646000 audit[5810]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5810 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:35.646000 audit[5810]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffee50b030 a2=0 a3=1 items=0 ppid=3806 pid=5810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.646000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:24:35.648000 audit: BPF prog-id=264 op=LOAD Jan 28 00:24:35.649000 audit: BPF prog-id=265 op=LOAD Jan 28 00:24:35.649000 audit[5789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=5777 pid=5789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161326562383930306433313866373437366665323866343835643135 Jan 28 00:24:35.649000 audit: 
BPF prog-id=265 op=UNLOAD Jan 28 00:24:35.649000 audit[5789]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5777 pid=5789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161326562383930306433313866373437366665323866343835643135 Jan 28 00:24:35.649000 audit: BPF prog-id=266 op=LOAD Jan 28 00:24:35.649000 audit[5789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=5777 pid=5789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161326562383930306433313866373437366665323866343835643135 Jan 28 00:24:35.649000 audit: BPF prog-id=267 op=LOAD Jan 28 00:24:35.649000 audit[5789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=5777 pid=5789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161326562383930306433313866373437366665323866343835643135 Jan 28 00:24:35.649000 audit: BPF prog-id=267 op=UNLOAD Jan 28 00:24:35.649000 audit[5789]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5777 pid=5789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161326562383930306433313866373437366665323866343835643135 Jan 28 00:24:35.649000 audit: BPF prog-id=266 op=UNLOAD Jan 28 00:24:35.649000 audit[5789]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5777 pid=5789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161326562383930306433313866373437366665323866343835643135 Jan 28 00:24:35.649000 audit: BPF prog-id=268 op=LOAD Jan 28 00:24:35.649000 audit[5789]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=5777 
pid=5789 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161326562383930306433313866373437366665323866343835643135 Jan 28 00:24:35.656429 containerd[2198]: 2026-01-28 00:24:35.440 [INFO][5698] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--n--77eb5aaac5-k8s-csi--node--driver--w2sfv-eth0 csi-node-driver- calico-system f6d08a70-95be-4168-8a2f-3e965a6278e2 736 0 2026-01-28 00:23:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547.1.0-n-77eb5aaac5 csi-node-driver-w2sfv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic7ea02cee0d [] [] }} ContainerID="1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66" Namespace="calico-system" Pod="csi-node-driver-w2sfv" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-csi--node--driver--w2sfv-" Jan 28 00:24:35.656429 containerd[2198]: 2026-01-28 00:24:35.440 [INFO][5698] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66" Namespace="calico-system" Pod="csi-node-driver-w2sfv" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-csi--node--driver--w2sfv-eth0" Jan 28 00:24:35.656429 containerd[2198]: 2026-01-28 00:24:35.470 [INFO][5737] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66" HandleID="k8s-pod-network.1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-csi--node--driver--w2sfv-eth0" Jan 28 00:24:35.656429 containerd[2198]: 2026-01-28 00:24:35.471 [INFO][5737] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66" HandleID="k8s-pod-network.1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-csi--node--driver--w2sfv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000255860), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.1.0-n-77eb5aaac5", "pod":"csi-node-driver-w2sfv", "timestamp":"2026-01-28 00:24:35.470482273 +0000 UTC"}, Hostname:"ci-4547.1.0-n-77eb5aaac5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:24:35.656429 containerd[2198]: 2026-01-28 00:24:35.471 [INFO][5737] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:24:35.656429 containerd[2198]: 2026-01-28 00:24:35.504 [INFO][5737] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:24:35.656429 containerd[2198]: 2026-01-28 00:24:35.504 [INFO][5737] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-n-77eb5aaac5' Jan 28 00:24:35.656429 containerd[2198]: 2026-01-28 00:24:35.570 [INFO][5737] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.656429 containerd[2198]: 2026-01-28 00:24:35.578 [INFO][5737] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.656429 containerd[2198]: 2026-01-28 00:24:35.588 [INFO][5737] ipam/ipam.go 511: Trying affinity for 192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.656429 containerd[2198]: 2026-01-28 00:24:35.595 [INFO][5737] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.656429 containerd[2198]: 2026-01-28 00:24:35.603 [INFO][5737] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.656429 containerd[2198]: 2026-01-28 00:24:35.603 [INFO][5737] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.656429 containerd[2198]: 2026-01-28 00:24:35.606 [INFO][5737] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66 Jan 28 00:24:35.656429 containerd[2198]: 2026-01-28 00:24:35.624 [INFO][5737] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.656429 containerd[2198]: 2026-01-28 00:24:35.634 [INFO][5737] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.11.199/26] block=192.168.11.192/26 handle="k8s-pod-network.1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.656429 containerd[2198]: 2026-01-28 00:24:35.634 [INFO][5737] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.199/26] handle="k8s-pod-network.1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.656429 containerd[2198]: 2026-01-28 00:24:35.634 [INFO][5737] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 00:24:35.656429 containerd[2198]: 2026-01-28 00:24:35.634 [INFO][5737] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.199/26] IPv6=[] ContainerID="1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66" HandleID="k8s-pod-network.1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-csi--node--driver--w2sfv-eth0" Jan 28 00:24:35.656813 containerd[2198]: 2026-01-28 00:24:35.637 [INFO][5698] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66" Namespace="calico-system" Pod="csi-node-driver-w2sfv" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-csi--node--driver--w2sfv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--n--77eb5aaac5-k8s-csi--node--driver--w2sfv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f6d08a70-95be-4168-8a2f-3e965a6278e2", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-n-77eb5aaac5", ContainerID:"", Pod:"csi-node-driver-w2sfv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic7ea02cee0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:24:35.656813 containerd[2198]: 2026-01-28 00:24:35.637 [INFO][5698] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.199/32] ContainerID="1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66" Namespace="calico-system" Pod="csi-node-driver-w2sfv" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-csi--node--driver--w2sfv-eth0" Jan 28 00:24:35.656813 containerd[2198]: 2026-01-28 00:24:35.637 [INFO][5698] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7ea02cee0d ContainerID="1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66" Namespace="calico-system" Pod="csi-node-driver-w2sfv" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-csi--node--driver--w2sfv-eth0" Jan 28 00:24:35.656813 containerd[2198]: 2026-01-28 00:24:35.640 [INFO][5698] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66" Namespace="calico-system" Pod="csi-node-driver-w2sfv" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-csi--node--driver--w2sfv-eth0" Jan 28 00:24:35.656813 containerd[2198]: 2026-01-28 00:24:35.641 [INFO][5698] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66" Namespace="calico-system" Pod="csi-node-driver-w2sfv" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-csi--node--driver--w2sfv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--n--77eb5aaac5-k8s-csi--node--driver--w2sfv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f6d08a70-95be-4168-8a2f-3e965a6278e2", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-n-77eb5aaac5", ContainerID:"1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66", Pod:"csi-node-driver-w2sfv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.11.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic7ea02cee0d", MAC:"7e:68:35:54:6b:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:24:35.656813 containerd[2198]: 2026-01-28 00:24:35.653 [INFO][5698] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66" Namespace="calico-system" Pod="csi-node-driver-w2sfv" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-csi--node--driver--w2sfv-eth0" Jan 28 00:24:35.672000 audit[5821]: NETFILTER_CFG table=filter:147 family=2 entries=40 op=nft_register_chain pid=5821 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:24:35.672000 audit[5821]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20784 a0=3 a1=ffffd7a6d730 a2=0 a3=ffffa1d86fa8 items=0 ppid=5064 pid=5821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.672000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:24:35.701690 containerd[2198]: time="2026-01-28T00:24:35.701657081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6db66f5c9f-tsvdr,Uid:d774fe09-bd7c-498b-91a3-e6c2f720c9c3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"aa2eb8900d318f7476fe28f485d151a9d3be9dab6610ea88a7044a2922f32ea4\"" Jan 28 00:24:35.702740 containerd[2198]: time="2026-01-28T00:24:35.702708020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:24:35.713381 containerd[2198]: time="2026-01-28T00:24:35.713352982Z" level=info msg="connecting to shim 
1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66" address="unix:///run/containerd/s/65dd9f8767fcdda8255f0e05bf2cda6cffae0887a11cfda0a223d98f75af7c44" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:24:35.726026 systemd-networkd[1903]: cali4a61d14993e: Link UP Jan 28 00:24:35.726945 systemd-networkd[1903]: cali4a61d14993e: Gained carrier Jan 28 00:24:35.746281 containerd[2198]: 2026-01-28 00:24:35.469 [INFO][5717] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--n--77eb5aaac5-k8s-calico--kube--controllers--758f45684b--zdqfk-eth0 calico-kube-controllers-758f45684b- calico-system 11c0eb0b-ad29-4c1b-b01f-f65a107c6011 853 0 2026-01-28 00:23:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:758f45684b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547.1.0-n-77eb5aaac5 calico-kube-controllers-758f45684b-zdqfk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4a61d14993e [] [] }} ContainerID="453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901" Namespace="calico-system" Pod="calico-kube-controllers-758f45684b-zdqfk" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--kube--controllers--758f45684b--zdqfk-" Jan 28 00:24:35.746281 containerd[2198]: 2026-01-28 00:24:35.469 [INFO][5717] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901" Namespace="calico-system" Pod="calico-kube-controllers-758f45684b-zdqfk" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--kube--controllers--758f45684b--zdqfk-eth0" Jan 28 00:24:35.746281 containerd[2198]: 2026-01-28 00:24:35.490 [INFO][5751] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901" HandleID="k8s-pod-network.453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-calico--kube--controllers--758f45684b--zdqfk-eth0" Jan 28 00:24:35.746281 containerd[2198]: 2026-01-28 00:24:35.490 [INFO][5751] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901" HandleID="k8s-pod-network.453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-calico--kube--controllers--758f45684b--zdqfk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.1.0-n-77eb5aaac5", "pod":"calico-kube-controllers-758f45684b-zdqfk", "timestamp":"2026-01-28 00:24:35.490864787 +0000 UTC"}, Hostname:"ci-4547.1.0-n-77eb5aaac5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:24:35.746281 containerd[2198]: 2026-01-28 00:24:35.491 [INFO][5751] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:24:35.746281 containerd[2198]: 2026-01-28 00:24:35.634 [INFO][5751] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:24:35.746281 containerd[2198]: 2026-01-28 00:24:35.634 [INFO][5751] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-n-77eb5aaac5' Jan 28 00:24:35.746281 containerd[2198]: 2026-01-28 00:24:35.668 [INFO][5751] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.746281 containerd[2198]: 2026-01-28 00:24:35.679 [INFO][5751] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.746281 containerd[2198]: 2026-01-28 00:24:35.692 [INFO][5751] ipam/ipam.go 511: Trying affinity for 192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.746281 containerd[2198]: 2026-01-28 00:24:35.693 [INFO][5751] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.746281 containerd[2198]: 2026-01-28 00:24:35.697 [INFO][5751] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.746281 containerd[2198]: 2026-01-28 00:24:35.697 [INFO][5751] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.746281 containerd[2198]: 2026-01-28 00:24:35.699 [INFO][5751] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901 Jan 28 00:24:35.746281 containerd[2198]: 2026-01-28 00:24:35.704 [INFO][5751] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.746281 containerd[2198]: 2026-01-28 00:24:35.717 [INFO][5751] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.11.200/26] block=192.168.11.192/26 handle="k8s-pod-network.453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.746281 containerd[2198]: 2026-01-28 00:24:35.717 [INFO][5751] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.200/26] handle="k8s-pod-network.453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:35.746281 containerd[2198]: 2026-01-28 00:24:35.717 [INFO][5751] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 00:24:35.746281 containerd[2198]: 2026-01-28 00:24:35.717 [INFO][5751] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.200/26] IPv6=[] ContainerID="453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901" HandleID="k8s-pod-network.453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-calico--kube--controllers--758f45684b--zdqfk-eth0" Jan 28 00:24:35.746892 containerd[2198]: 2026-01-28 00:24:35.723 [INFO][5717] cni-plugin/k8s.go 418: Populated endpoint ContainerID="453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901" Namespace="calico-system" Pod="calico-kube-controllers-758f45684b-zdqfk" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--kube--controllers--758f45684b--zdqfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--n--77eb5aaac5-k8s-calico--kube--controllers--758f45684b--zdqfk-eth0", GenerateName:"calico-kube-controllers-758f45684b-", Namespace:"calico-system", SelfLink:"", UID:"11c0eb0b-ad29-4c1b-b01f-f65a107c6011", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"758f45684b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-n-77eb5aaac5", ContainerID:"", Pod:"calico-kube-controllers-758f45684b-zdqfk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a61d14993e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:24:35.746892 containerd[2198]: 2026-01-28 00:24:35.723 [INFO][5717] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.200/32] ContainerID="453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901" Namespace="calico-system" Pod="calico-kube-controllers-758f45684b-zdqfk" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--kube--controllers--758f45684b--zdqfk-eth0" Jan 28 00:24:35.746892 containerd[2198]: 2026-01-28 00:24:35.723 [INFO][5717] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a61d14993e ContainerID="453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901" Namespace="calico-system" Pod="calico-kube-controllers-758f45684b-zdqfk" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--kube--controllers--758f45684b--zdqfk-eth0" Jan 28 00:24:35.746892 containerd[2198]: 2026-01-28 00:24:35.729 [INFO][5717] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901" Namespace="calico-system" Pod="calico-kube-controllers-758f45684b-zdqfk" 
WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--kube--controllers--758f45684b--zdqfk-eth0" Jan 28 00:24:35.746892 containerd[2198]: 2026-01-28 00:24:35.729 [INFO][5717] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901" Namespace="calico-system" Pod="calico-kube-controllers-758f45684b-zdqfk" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--kube--controllers--758f45684b--zdqfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--n--77eb5aaac5-k8s-calico--kube--controllers--758f45684b--zdqfk-eth0", GenerateName:"calico-kube-controllers-758f45684b-", Namespace:"calico-system", SelfLink:"", UID:"11c0eb0b-ad29-4c1b-b01f-f65a107c6011", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"758f45684b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-n-77eb5aaac5", ContainerID:"453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901", Pod:"calico-kube-controllers-758f45684b-zdqfk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.11.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4a61d14993e", MAC:"ba:fb:1f:73:f2:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:24:35.746892 containerd[2198]: 2026-01-28 00:24:35.743 [INFO][5717] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901" Namespace="calico-system" Pod="calico-kube-controllers-758f45684b-zdqfk" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--kube--controllers--758f45684b--zdqfk-eth0" Jan 28 00:24:35.746990 systemd[1]: Started cri-containerd-1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66.scope - libcontainer container 1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66. 
Jan 28 00:24:35.759000 audit: BPF prog-id=269 op=LOAD Jan 28 00:24:35.759000 audit[5875]: NETFILTER_CFG table=filter:148 family=2 entries=52 op=nft_register_chain pid=5875 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:24:35.759000 audit[5875]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24312 a0=3 a1=ffffeb3b2a00 a2=0 a3=ffff9563cfa8 items=0 ppid=5064 pid=5875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.759000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:24:35.760000 audit: BPF prog-id=270 op=LOAD Jan 28 00:24:35.760000 audit[5847]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=5837 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162393337363638656166643331316164623330353531613238306264 Jan 28 00:24:35.760000 audit: BPF prog-id=270 op=UNLOAD Jan 28 00:24:35.760000 audit[5847]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5837 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162393337363638656166643331316164623330353531613238306264 Jan 28 00:24:35.760000 audit: BPF prog-id=271 op=LOAD Jan 28 00:24:35.760000 audit[5847]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=5837 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162393337363638656166643331316164623330353531613238306264 Jan 28 00:24:35.760000 audit: BPF prog-id=272 op=LOAD Jan 28 00:24:35.760000 audit[5847]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=5837 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162393337363638656166643331316164623330353531613238306264 Jan 28 00:24:35.760000 audit: BPF 
prog-id=272 op=UNLOAD Jan 28 00:24:35.760000 audit[5847]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5837 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162393337363638656166643331316164623330353531613238306264 Jan 28 00:24:35.761000 audit: BPF prog-id=271 op=UNLOAD Jan 28 00:24:35.761000 audit[5847]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5837 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162393337363638656166643331316164623330353531613238306264 Jan 28 00:24:35.761000 audit: BPF prog-id=273 op=LOAD Jan 28 00:24:35.761000 audit[5847]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=5837 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162393337363638656166643331316164623330353531613238306264 Jan 28 00:24:35.778836 containerd[2198]: time="2026-01-28T00:24:35.778796009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2sfv,Uid:f6d08a70-95be-4168-8a2f-3e965a6278e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"1b937668eafd311adb30551a280bd9be2deef25413b4ea671ab0bb8208822b66\"" Jan 28 00:24:35.803278 containerd[2198]: time="2026-01-28T00:24:35.803188382Z" level=info msg="connecting to shim 453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901" address="unix:///run/containerd/s/b9cad9d2f7725e0bf96f27b0778b1b561450a937eb33b20a648f87fe5fbc8f92" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:24:35.819954 systemd[1]: Started cri-containerd-453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901.scope - libcontainer container 453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901. 
Jan 28 00:24:35.831000 audit: BPF prog-id=274 op=LOAD Jan 28 00:24:35.832000 audit: BPF prog-id=275 op=LOAD Jan 28 00:24:35.832000 audit[5904]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=5892 pid=5904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435336338643136313462373465313538613561643736303064646338 Jan 28 00:24:35.832000 audit: BPF prog-id=275 op=UNLOAD Jan 28 00:24:35.832000 audit[5904]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5892 pid=5904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435336338643136313462373465313538613561643736303064646338 Jan 28 00:24:35.832000 audit: BPF prog-id=276 op=LOAD Jan 28 00:24:35.832000 audit[5904]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=5892 pid=5904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435336338643136313462373465313538613561643736303064646338 Jan 28 00:24:35.832000 audit: BPF prog-id=277 op=LOAD Jan 28 00:24:35.832000 audit[5904]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=5892 pid=5904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435336338643136313462373465313538613561643736303064646338 Jan 28 00:24:35.832000 audit: BPF prog-id=277 op=UNLOAD Jan 28 00:24:35.832000 audit[5904]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5892 pid=5904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435336338643136313462373465313538613561643736303064646338 Jan 28 00:24:35.832000 audit: BPF prog-id=276 op=UNLOAD Jan 28 00:24:35.832000 audit[5904]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5892 pid=5904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435336338643136313462373465313538613561643736303064646338 Jan 28 00:24:35.832000 audit: BPF prog-id=278 op=LOAD Jan 28 00:24:35.832000 audit[5904]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=5892 pid=5904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:35.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435336338643136313462373465313538613561643736303064646338 Jan 28 00:24:35.854757 containerd[2198]: time="2026-01-28T00:24:35.854723345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-758f45684b-zdqfk,Uid:11c0eb0b-ad29-4c1b-b01f-f65a107c6011,Namespace:calico-system,Attempt:0,} returns sandbox id \"453c8d1614b74e158a5ad7600ddc8c4183570fcba77402b33762fef7749b1901\"" Jan 28 00:24:36.035479 containerd[2198]: time="2026-01-28T00:24:36.035378322Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:24:36.039305 containerd[2198]: time="2026-01-28T00:24:36.039209952Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:24:36.039305 containerd[2198]: time="2026-01-28T00:24:36.039272841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 00:24:36.039559 kubelet[3705]: E0128 00:24:36.039501 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:24:36.039559 kubelet[3705]: E0128 00:24:36.039556 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:24:36.040059 kubelet[3705]: E0128 00:24:36.039743 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rcvr7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6db66f5c9f-tsvdr_calico-apiserver(d774fe09-bd7c-498b-91a3-e6c2f720c9c3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:24:36.040126 containerd[2198]: time="2026-01-28T00:24:36.039793327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 00:24:36.041549 kubelet[3705]: E0128 00:24:36.041511 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-tsvdr" podUID="d774fe09-bd7c-498b-91a3-e6c2f720c9c3" Jan 28 00:24:36.362373 containerd[2198]: time="2026-01-28T00:24:36.362116611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cb8f7567-dv7tt,Uid:f8bf6ab5-ad5c-41b7-962e-92c73fabe079,Namespace:calico-apiserver,Attempt:0,}" Jan 28 00:24:36.396161 containerd[2198]: time="2026-01-28T00:24:36.396049317Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:24:36.399280 containerd[2198]: time="2026-01-28T00:24:36.399225793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not 
found" Jan 28 00:24:36.399748 containerd[2198]: time="2026-01-28T00:24:36.399276202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 00:24:36.399917 kubelet[3705]: E0128 00:24:36.399849 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:24:36.399917 kubelet[3705]: E0128 00:24:36.399887 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:24:36.400100 kubelet[3705]: E0128 00:24:36.400058 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pk8cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w2sfv_calico-system(f6d08a70-95be-4168-8a2f-3e965a6278e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 00:24:36.400384 containerd[2198]: time="2026-01-28T00:24:36.400351127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 00:24:36.458167 systemd-networkd[1903]: cali6fe20e70c16: Link UP Jan 28 00:24:36.458446 systemd-networkd[1903]: cali6fe20e70c16: Gained carrier Jan 28 00:24:36.473430 containerd[2198]: 2026-01-28 00:24:36.395 [INFO][5930] cni-plugin/plugin.go 340: Calico CNI found 
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--65cb8f7567--dv7tt-eth0 calico-apiserver-65cb8f7567- calico-apiserver f8bf6ab5-ad5c-41b7-962e-92c73fabe079 857 0 2026-01-28 00:23:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65cb8f7567 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.1.0-n-77eb5aaac5 calico-apiserver-65cb8f7567-dv7tt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6fe20e70c16 [] [] }} ContainerID="d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067" Namespace="calico-apiserver" Pod="calico-apiserver-65cb8f7567-dv7tt" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--65cb8f7567--dv7tt-" Jan 28 00:24:36.473430 containerd[2198]: 2026-01-28 00:24:36.395 [INFO][5930] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067" Namespace="calico-apiserver" Pod="calico-apiserver-65cb8f7567-dv7tt" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--65cb8f7567--dv7tt-eth0" Jan 28 00:24:36.473430 containerd[2198]: 2026-01-28 00:24:36.416 [INFO][5941] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067" HandleID="k8s-pod-network.d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--65cb8f7567--dv7tt-eth0" Jan 28 00:24:36.473430 containerd[2198]: 2026-01-28 00:24:36.416 [INFO][5941] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067" HandleID="k8s-pod-network.d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--65cb8f7567--dv7tt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab3a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.1.0-n-77eb5aaac5", "pod":"calico-apiserver-65cb8f7567-dv7tt", "timestamp":"2026-01-28 00:24:36.416408375 +0000 UTC"}, Hostname:"ci-4547.1.0-n-77eb5aaac5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:24:36.473430 containerd[2198]: 2026-01-28 00:24:36.416 [INFO][5941] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:24:36.473430 containerd[2198]: 2026-01-28 00:24:36.416 [INFO][5941] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:24:36.473430 containerd[2198]: 2026-01-28 00:24:36.416 [INFO][5941] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.1.0-n-77eb5aaac5' Jan 28 00:24:36.473430 containerd[2198]: 2026-01-28 00:24:36.425 [INFO][5941] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:36.473430 containerd[2198]: 2026-01-28 00:24:36.429 [INFO][5941] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:36.473430 containerd[2198]: 2026-01-28 00:24:36.432 [INFO][5941] ipam/ipam.go 511: Trying affinity for 192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:36.473430 containerd[2198]: 2026-01-28 00:24:36.434 [INFO][5941] ipam/ipam.go 158: Attempting to load block cidr=192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:36.473430 containerd[2198]: 2026-01-28 00:24:36.436 [INFO][5941] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.11.192/26 host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:36.473430 containerd[2198]: 2026-01-28 00:24:36.436 [INFO][5941] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.11.192/26 handle="k8s-pod-network.d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:36.473430 containerd[2198]: 2026-01-28 00:24:36.437 [INFO][5941] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067 Jan 28 00:24:36.473430 containerd[2198]: 2026-01-28 00:24:36.444 [INFO][5941] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.11.192/26 handle="k8s-pod-network.d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:36.473430 containerd[2198]: 2026-01-28 00:24:36.453 [INFO][5941] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.11.201/26] block=192.168.11.192/26 handle="k8s-pod-network.d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:36.473430 containerd[2198]: 2026-01-28 00:24:36.453 [INFO][5941] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.11.201/26] handle="k8s-pod-network.d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067" host="ci-4547.1.0-n-77eb5aaac5" Jan 28 00:24:36.473430 containerd[2198]: 2026-01-28 00:24:36.453 [INFO][5941] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
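
The repeated "fetch failed after status: 404 Not Found" / ErrImagePull records in this section indicate the referenced tags are missing under ghcr.io/flatcar/calico. A minimal Python sketch to confirm this from any machine is shown below; it assumes ghcr.io's usual anonymous token endpoint for public packages and a common OCI manifest Accept header, and the repository and tag are copied from the PullImage errors in the log, so treat it as an illustrative check rather than a reproduction of what ran on the node.

    import json
    import urllib.error
    import urllib.request

    # Image reference copied from the PullImage error records in this log.
    repo, tag = "flatcar/calico/apiserver", "v3.30.4"

    # Assumption: ghcr.io issues anonymous pull tokens for public packages
    # via its standard /token endpoint.
    token_url = "https://ghcr.io/token?scope=repository:%s:pull" % repo
    token = json.load(urllib.request.urlopen(token_url))["token"]

    req = urllib.request.Request(
        "https://ghcr.io/v2/%s/manifests/%s" % (repo, tag),
        headers={
            "Authorization": "Bearer " + token,
            "Accept": "application/vnd.oci.image.index.v1+json",
        },
    )
    try:
        urllib.request.urlopen(req)
        print("tag exists")
    except urllib.error.HTTPError as exc:
        # A 404 here matches containerd's "fetch failed after status: 404 Not Found".
        print("registry returned", exc.code)
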
Jan 28 00:24:36.473430 containerd[2198]: 2026-01-28 00:24:36.453 [INFO][5941] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.11.201/26] IPv6=[] ContainerID="d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067" HandleID="k8s-pod-network.d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067" Workload="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--65cb8f7567--dv7tt-eth0" Jan 28 00:24:36.473784 containerd[2198]: 2026-01-28 00:24:36.454 [INFO][5930] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067" Namespace="calico-apiserver" Pod="calico-apiserver-65cb8f7567-dv7tt" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--65cb8f7567--dv7tt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--65cb8f7567--dv7tt-eth0", GenerateName:"calico-apiserver-65cb8f7567-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8bf6ab5-ad5c-41b7-962e-92c73fabe079", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 23, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65cb8f7567", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-n-77eb5aaac5", ContainerID:"", Pod:"calico-apiserver-65cb8f7567-dv7tt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6fe20e70c16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:24:36.473784 containerd[2198]: 2026-01-28 00:24:36.454 [INFO][5930] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.11.201/32] ContainerID="d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067" Namespace="calico-apiserver" Pod="calico-apiserver-65cb8f7567-dv7tt" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--65cb8f7567--dv7tt-eth0" Jan 28 00:24:36.473784 containerd[2198]: 2026-01-28 00:24:36.454 [INFO][5930] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6fe20e70c16 ContainerID="d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067" Namespace="calico-apiserver" Pod="calico-apiserver-65cb8f7567-dv7tt" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--65cb8f7567--dv7tt-eth0" Jan 28 00:24:36.473784 containerd[2198]: 2026-01-28 00:24:36.458 [INFO][5930] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067" Namespace="calico-apiserver" Pod="calico-apiserver-65cb8f7567-dv7tt" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--65cb8f7567--dv7tt-eth0" Jan 28 00:24:36.473784 containerd[2198]: 2026-01-28 00:24:36.459 
[INFO][5930] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067" Namespace="calico-apiserver" Pod="calico-apiserver-65cb8f7567-dv7tt" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--65cb8f7567--dv7tt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--65cb8f7567--dv7tt-eth0", GenerateName:"calico-apiserver-65cb8f7567-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8bf6ab5-ad5c-41b7-962e-92c73fabe079", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 23, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65cb8f7567", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.1.0-n-77eb5aaac5", ContainerID:"d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067", Pod:"calico-apiserver-65cb8f7567-dv7tt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.11.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6fe20e70c16", MAC:"1e:da:a6:5d:58:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:24:36.473784 containerd[2198]: 2026-01-28 00:24:36.470 [INFO][5930] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067" Namespace="calico-apiserver" Pod="calico-apiserver-65cb8f7567-dv7tt" WorkloadEndpoint="ci--4547.1.0--n--77eb5aaac5-k8s-calico--apiserver--65cb8f7567--dv7tt-eth0" Jan 28 00:24:36.481000 audit[5957]: NETFILTER_CFG table=filter:149 family=2 entries=57 op=nft_register_chain pid=5957 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:24:36.481000 audit[5957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27812 a0=3 a1=fffff3ea5660 a2=0 a3=ffffafd2dfa8 items=0 ppid=5064 pid=5957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:36.481000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:24:36.514641 containerd[2198]: time="2026-01-28T00:24:36.514608724Z" level=info msg="connecting to shim d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067" address="unix:///run/containerd/s/4e4620fead5a896ca8aecf33536f9bd585c0c0ff4ca7133396f650aefa190777" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:24:36.529960 systemd[1]: Started cri-containerd-d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067.scope - libcontainer container 
d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067. Jan 28 00:24:36.537000 audit: BPF prog-id=279 op=LOAD Jan 28 00:24:36.537000 audit: BPF prog-id=280 op=LOAD Jan 28 00:24:36.537000 audit[5978]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5967 pid=5978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:36.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326263633261643765633136346638376363633562326263336636 Jan 28 00:24:36.537000 audit: BPF prog-id=280 op=UNLOAD Jan 28 00:24:36.537000 audit[5978]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5967 pid=5978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:36.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326263633261643765633136346638376363633562326263336636 Jan 28 00:24:36.537000 audit: BPF prog-id=281 op=LOAD Jan 28 00:24:36.537000 audit[5978]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5967 pid=5978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:36.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326263633261643765633136346638376363633562326263336636 Jan 28 00:24:36.538000 audit: BPF prog-id=282 op=LOAD Jan 28 00:24:36.538000 audit[5978]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5967 pid=5978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:36.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326263633261643765633136346638376363633562326263336636 Jan 28 00:24:36.538000 audit: BPF prog-id=282 op=UNLOAD Jan 28 00:24:36.538000 audit[5978]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5967 pid=5978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:36.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326263633261643765633136346638376363633562326263336636 Jan 28 00:24:36.538000 audit: 
BPF prog-id=281 op=UNLOAD Jan 28 00:24:36.538000 audit[5978]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5967 pid=5978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:36.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326263633261643765633136346638376363633562326263336636 Jan 28 00:24:36.538000 audit: BPF prog-id=283 op=LOAD Jan 28 00:24:36.538000 audit[5978]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5967 pid=5978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:36.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326263633261643765633136346638376363633562326263336636 Jan 28 00:24:36.559464 containerd[2198]: time="2026-01-28T00:24:36.559433374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65cb8f7567-dv7tt,Uid:f8bf6ab5-ad5c-41b7-962e-92c73fabe079,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d02bcc2ad7ec164f87ccc5b2bc3f69c71deba97cdd083b8279457d25c41fa067\"" Jan 28 00:24:36.584962 kubelet[3705]: E0128 00:24:36.584907 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-n9l6g" podUID="d62fc2fd-8ccc-48de-b10b-98a0aa5672ea" Jan 28 00:24:36.585400 kubelet[3705]: E0128 00:24:36.585355 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-tsvdr" podUID="d774fe09-bd7c-498b-91a3-e6c2f720c9c3" Jan 28 00:24:36.616000 audit[6006]: NETFILTER_CFG table=filter:150 family=2 entries=14 op=nft_register_rule pid=6006 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:36.616000 audit[6006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd93ec5d0 a2=0 a3=1 items=0 ppid=3806 pid=6006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:36.616000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:24:36.620000 
audit[6006]: NETFILTER_CFG table=nat:151 family=2 entries=20 op=nft_register_rule pid=6006 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:36.620000 audit[6006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd93ec5d0 a2=0 a3=1 items=0 ppid=3806 pid=6006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:36.620000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:24:36.671769 containerd[2198]: time="2026-01-28T00:24:36.671736991Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:24:36.676118 containerd[2198]: time="2026-01-28T00:24:36.676088891Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 00:24:36.676302 containerd[2198]: time="2026-01-28T00:24:36.676191365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 00:24:36.676437 kubelet[3705]: E0128 00:24:36.676379 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:24:36.676470 kubelet[3705]: E0128 00:24:36.676438 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:24:36.676690 kubelet[3705]: E0128 00:24:36.676657 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6lkvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-758f45684b-zdqfk_calico-system(11c0eb0b-ad29-4c1b-b01f-f65a107c6011): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 00:24:36.676897 containerd[2198]: time="2026-01-28T00:24:36.676706851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 00:24:36.678022 kubelet[3705]: E0128 00:24:36.677953 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f45684b-zdqfk" podUID="11c0eb0b-ad29-4c1b-b01f-f65a107c6011" Jan 28 00:24:36.964804 containerd[2198]: 
time="2026-01-28T00:24:36.964689475Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:24:36.967996 containerd[2198]: time="2026-01-28T00:24:36.967962394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 00:24:36.968166 containerd[2198]: time="2026-01-28T00:24:36.968041652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 00:24:36.968233 kubelet[3705]: E0128 00:24:36.968194 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:24:36.968271 kubelet[3705]: E0128 00:24:36.968240 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:24:36.969850 kubelet[3705]: E0128 00:24:36.968450 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pk8cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-w2sfv_calico-system(f6d08a70-95be-4168-8a2f-3e965a6278e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 00:24:36.969981 containerd[2198]: time="2026-01-28T00:24:36.969801739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:24:36.970485 kubelet[3705]: E0128 00:24:36.970146 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w2sfv" podUID="f6d08a70-95be-4168-8a2f-3e965a6278e2" Jan 28 00:24:36.970224 systemd-networkd[1903]: calidcce3df9668: Gained IPv6LL Jan 28 00:24:37.225013 systemd-networkd[1903]: cali4a61d14993e: Gained IPv6LL Jan 28 00:24:37.245451 containerd[2198]: time="2026-01-28T00:24:37.245414931Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:24:37.248835 containerd[2198]: time="2026-01-28T00:24:37.248787901Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:24:37.248909 containerd[2198]: time="2026-01-28T00:24:37.248863951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 00:24:37.249053 kubelet[3705]: E0128 00:24:37.249017 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:24:37.249462 kubelet[3705]: E0128 00:24:37.249063 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:24:37.249462 kubelet[3705]: E0128 00:24:37.249169 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8z95d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65cb8f7567-dv7tt_calico-apiserver(f8bf6ab5-ad5c-41b7-962e-92c73fabe079): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:24:37.250334 kubelet[3705]: E0128 00:24:37.250293 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65cb8f7567-dv7tt" podUID="f8bf6ab5-ad5c-41b7-962e-92c73fabe079" Jan 28 00:24:37.587257 kubelet[3705]: E0128 00:24:37.587196 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f45684b-zdqfk" podUID="11c0eb0b-ad29-4c1b-b01f-f65a107c6011" Jan 28 00:24:37.589143 kubelet[3705]: E0128 00:24:37.589105 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-tsvdr" podUID="d774fe09-bd7c-498b-91a3-e6c2f720c9c3" Jan 28 00:24:37.589425 kubelet[3705]: E0128 00:24:37.589404 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65cb8f7567-dv7tt" podUID="f8bf6ab5-ad5c-41b7-962e-92c73fabe079" Jan 28 00:24:37.589493 kubelet[3705]: E0128 00:24:37.589472 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w2sfv" podUID="f6d08a70-95be-4168-8a2f-3e965a6278e2" Jan 28 00:24:37.655000 audit[6010]: NETFILTER_CFG table=filter:152 family=2 entries=14 op=nft_register_rule pid=6010 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:37.655000 audit[6010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffe9db720 a2=0 a3=1 items=0 ppid=3806 pid=6010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:37.655000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:24:37.660000 audit[6010]: NETFILTER_CFG table=nat:153 family=2 entries=20 op=nft_register_rule pid=6010 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:24:37.660000 audit[6010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffffe9db720 a2=0 a3=1 items=0 ppid=3806 pid=6010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:24:37.660000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:24:37.673001 systemd-networkd[1903]: calic7ea02cee0d: Gained IPv6LL Jan 28 00:24:38.313216 systemd-networkd[1903]: cali6fe20e70c16: Gained IPv6LL Jan 28 00:24:41.362955 containerd[2198]: time="2026-01-28T00:24:41.362846740Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 00:24:41.650396 containerd[2198]: time="2026-01-28T00:24:41.650264910Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:24:41.654478 containerd[2198]: time="2026-01-28T00:24:41.654442709Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 00:24:41.654541 containerd[2198]: time="2026-01-28T00:24:41.654499062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 00:24:41.654679 kubelet[3705]: E0128 00:24:41.654634 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:24:41.654958 kubelet[3705]: E0128 00:24:41.654687 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:24:41.654958 kubelet[3705]: E0128 00:24:41.654862 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5b63b438c56a4f4382ff93bbd04b95ca,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sl6bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65d59647fb-65s4b_calico-system(e2c157ae-30f5-408a-abb4-61e3e5e3c10f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 00:24:41.657397 containerd[2198]: time="2026-01-28T00:24:41.657288760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 00:24:41.938245 containerd[2198]: 
time="2026-01-28T00:24:41.937777714Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:24:41.941845 containerd[2198]: time="2026-01-28T00:24:41.941759035Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 00:24:41.941955 containerd[2198]: time="2026-01-28T00:24:41.941853526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 00:24:41.942102 kubelet[3705]: E0128 00:24:41.942068 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:24:41.942154 kubelet[3705]: E0128 00:24:41.942114 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:24:41.942263 kubelet[3705]: E0128 00:24:41.942201 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sl6bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65d59647fb-65s4b_calico-system(e2c157ae-30f5-408a-abb4-61e3e5e3c10f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 00:24:41.943404 kubelet[3705]: E0128 00:24:41.943367 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65d59647fb-65s4b" podUID="e2c157ae-30f5-408a-abb4-61e3e5e3c10f" Jan 28 00:24:48.364428 containerd[2198]: time="2026-01-28T00:24:48.364241921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 00:24:48.710254 containerd[2198]: time="2026-01-28T00:24:48.710136340Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:24:48.713893 containerd[2198]: time="2026-01-28T00:24:48.713859613Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 00:24:48.713893 containerd[2198]: time="2026-01-28T00:24:48.713911734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 00:24:48.714238 kubelet[3705]: E0128 00:24:48.714149 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:24:48.714238 kubelet[3705]: E0128 00:24:48.714199 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:24:48.714690 kubelet[3705]: E0128 00:24:48.714624 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6lkvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-758f45684b-zdqfk_calico-system(11c0eb0b-ad29-4c1b-b01f-f65a107c6011): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 00:24:48.715827 kubelet[3705]: E0128 00:24:48.715777 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f45684b-zdqfk" podUID="11c0eb0b-ad29-4c1b-b01f-f65a107c6011" Jan 28 00:24:50.363971 containerd[2198]: time="2026-01-28T00:24:50.363921602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 00:24:50.677207 containerd[2198]: 
time="2026-01-28T00:24:50.677093887Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:24:50.681548 containerd[2198]: time="2026-01-28T00:24:50.681515963Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 00:24:50.681604 containerd[2198]: time="2026-01-28T00:24:50.681579813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 00:24:50.681839 kubelet[3705]: E0128 00:24:50.681710 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:24:50.681839 kubelet[3705]: E0128 00:24:50.681753 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:24:50.682118 kubelet[3705]: E0128 00:24:50.681938 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhxzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gxssn_calico-system(37d401e3-39ef-4596-8144-de1aba842d50): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 00:24:50.682928 containerd[2198]: time="2026-01-28T00:24:50.682866497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:24:50.683035 kubelet[3705]: E0128 00:24:50.683010 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxssn" podUID="37d401e3-39ef-4596-8144-de1aba842d50" Jan 28 00:24:50.958070 containerd[2198]: time="2026-01-28T00:24:50.957944201Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:24:50.962036 containerd[2198]: time="2026-01-28T00:24:50.961977666Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:24:50.962236 containerd[2198]: time="2026-01-28T00:24:50.962027851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 00:24:50.962349 kubelet[3705]: E0128 00:24:50.962311 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:24:50.962390 kubelet[3705]: E0128 00:24:50.962356 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:24:50.962600 kubelet[3705]: E0128 00:24:50.962556 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8z95d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65cb8f7567-dv7tt_calico-apiserver(f8bf6ab5-ad5c-41b7-962e-92c73fabe079): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:24:50.963129 containerd[2198]: time="2026-01-28T00:24:50.963056272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 00:24:50.964067 kubelet[3705]: E0128 00:24:50.964036 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65cb8f7567-dv7tt" podUID="f8bf6ab5-ad5c-41b7-962e-92c73fabe079" Jan 28 00:24:51.358185 containerd[2198]: time="2026-01-28T00:24:51.358137512Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:24:51.361633 containerd[2198]: time="2026-01-28T00:24:51.361526399Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 00:24:51.361633 containerd[2198]: time="2026-01-28T00:24:51.361584369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 00:24:51.364483 kubelet[3705]: E0128 00:24:51.364007 3705 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:24:51.364483 kubelet[3705]: E0128 00:24:51.364049 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:24:51.364483 kubelet[3705]: E0128 00:24:51.364199 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pk8cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w2sfv_calico-system(f6d08a70-95be-4168-8a2f-3e965a6278e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 00:24:51.364670 containerd[2198]: time="2026-01-28T00:24:51.364618054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:24:51.656164 containerd[2198]: time="2026-01-28T00:24:51.655937782Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:24:51.659616 containerd[2198]: time="2026-01-28T00:24:51.659536259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:24:51.659713 
containerd[2198]: time="2026-01-28T00:24:51.659599924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 00:24:51.659742 kubelet[3705]: E0128 00:24:51.659713 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:24:51.659774 kubelet[3705]: E0128 00:24:51.659754 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:24:51.660967 kubelet[3705]: E0128 00:24:51.659931 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5v98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6db66f5c9f-n9l6g_calico-apiserver(d62fc2fd-8ccc-48de-b10b-98a0aa5672ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:24:51.661090 containerd[2198]: time="2026-01-28T00:24:51.660151076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:24:51.661175 kubelet[3705]: E0128 
00:24:51.661152 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-n9l6g" podUID="d62fc2fd-8ccc-48de-b10b-98a0aa5672ea" Jan 28 00:24:51.889650 containerd[2198]: time="2026-01-28T00:24:51.889610934Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:24:51.893624 containerd[2198]: time="2026-01-28T00:24:51.893537585Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:24:51.893624 containerd[2198]: time="2026-01-28T00:24:51.893585850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 00:24:51.893863 kubelet[3705]: E0128 00:24:51.893810 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:24:51.894200 kubelet[3705]: E0128 00:24:51.893870 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:24:51.894200 kubelet[3705]: E0128 00:24:51.894067 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rcvr7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6db66f5c9f-tsvdr_calico-apiserver(d774fe09-bd7c-498b-91a3-e6c2f720c9c3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:24:51.895021 containerd[2198]: time="2026-01-28T00:24:51.894980808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 00:24:51.896762 kubelet[3705]: E0128 00:24:51.896104 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-tsvdr" podUID="d774fe09-bd7c-498b-91a3-e6c2f720c9c3" Jan 28 00:24:52.137792 containerd[2198]: time="2026-01-28T00:24:52.137621221Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:24:52.140940 containerd[2198]: time="2026-01-28T00:24:52.140855069Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 00:24:52.140940 containerd[2198]: time="2026-01-28T00:24:52.140896374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 00:24:52.141181 kubelet[3705]: E0128 00:24:52.141126 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:24:52.141181 kubelet[3705]: E0128 00:24:52.141183 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:24:52.141415 kubelet[3705]: E0128 00:24:52.141282 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pk8cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w2sfv_calico-system(f6d08a70-95be-4168-8a2f-3e965a6278e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 00:24:52.142683 kubelet[3705]: E0128 00:24:52.142572 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w2sfv" podUID="f6d08a70-95be-4168-8a2f-3e965a6278e2" Jan 28 00:24:54.364469 kubelet[3705]: E0128 00:24:54.364380 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65d59647fb-65s4b" podUID="e2c157ae-30f5-408a-abb4-61e3e5e3c10f" Jan 28 00:25:01.362408 kubelet[3705]: E0128 00:25:01.362336 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f45684b-zdqfk" podUID="11c0eb0b-ad29-4c1b-b01f-f65a107c6011" Jan 28 00:25:04.362692 kubelet[3705]: E0128 00:25:04.362519 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-tsvdr" podUID="d774fe09-bd7c-498b-91a3-e6c2f720c9c3" Jan 28 00:25:04.362692 kubelet[3705]: E0128 00:25:04.362654 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-n9l6g" podUID="d62fc2fd-8ccc-48de-b10b-98a0aa5672ea" Jan 28 00:25:05.362534 kubelet[3705]: E0128 00:25:05.362481 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65cb8f7567-dv7tt" podUID="f8bf6ab5-ad5c-41b7-962e-92c73fabe079" Jan 28 00:25:05.363662 kubelet[3705]: E0128 00:25:05.362848 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxssn" podUID="37d401e3-39ef-4596-8144-de1aba842d50" Jan 28 00:25:05.364352 kubelet[3705]: E0128 00:25:05.364300 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w2sfv" podUID="f6d08a70-95be-4168-8a2f-3e965a6278e2" Jan 28 00:25:08.362984 containerd[2198]: time="2026-01-28T00:25:08.362944417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 00:25:08.658801 containerd[2198]: time="2026-01-28T00:25:08.658687115Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:25:08.663931 containerd[2198]: time="2026-01-28T00:25:08.663895848Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 00:25:08.663989 containerd[2198]: time="2026-01-28T00:25:08.663956145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 00:25:08.664129 kubelet[3705]: E0128 00:25:08.664100 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:25:08.664802 kubelet[3705]: E0128 00:25:08.664416 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:25:08.664802 kubelet[3705]: E0128 00:25:08.664518 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5b63b438c56a4f4382ff93bbd04b95ca,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sl6bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65d59647fb-65s4b_calico-system(e2c157ae-30f5-408a-abb4-61e3e5e3c10f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 00:25:08.667000 containerd[2198]: time="2026-01-28T00:25:08.666783294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 00:25:08.957481 containerd[2198]: time="2026-01-28T00:25:08.957372132Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:25:08.961127 containerd[2198]: time="2026-01-28T00:25:08.961011503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 00:25:08.961127 containerd[2198]: time="2026-01-28T00:25:08.961085769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 00:25:08.961249 kubelet[3705]: E0128 00:25:08.961206 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:25:08.961293 kubelet[3705]: E0128 00:25:08.961253 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:25:08.961370 kubelet[3705]: E0128 00:25:08.961339 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sl6bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65d59647fb-65s4b_calico-system(e2c157ae-30f5-408a-abb4-61e3e5e3c10f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 00:25:08.963269 kubelet[3705]: E0128 00:25:08.963238 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65d59647fb-65s4b" podUID="e2c157ae-30f5-408a-abb4-61e3e5e3c10f" Jan 28 00:25:14.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.33:22-10.200.16.10:57578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:14.094900 systemd[1]: Started sshd@7-10.200.20.33:22-10.200.16.10:57578.service - OpenSSH per-connection server daemon (10.200.16.10:57578). 
Jan 28 00:25:14.098557 kernel: kauditd_printk_skb: 139 callbacks suppressed Jan 28 00:25:14.098736 kernel: audit: type=1130 audit(1769559914.094:768): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.33:22-10.200.16.10:57578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:14.539000 audit[6065]: USER_ACCT pid=6065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:14.540722 sshd[6065]: Accepted publickey for core from 10.200.16.10 port 57578 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:25:14.556000 audit[6065]: CRED_ACQ pid=6065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:14.558558 sshd-session[6065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:25:14.571147 kernel: audit: type=1101 audit(1769559914.539:769): pid=6065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:14.571212 kernel: audit: type=1103 audit(1769559914.556:770): pid=6065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:14.581414 kernel: audit: type=1006 audit(1769559914.556:771): pid=6065 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 28 00:25:14.556000 audit[6065]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff9afed40 a2=3 a3=0 items=0 ppid=1 pid=6065 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:14.585748 systemd-logind[2162]: New session 11 of user core. Jan 28 00:25:14.600043 kernel: audit: type=1300 audit(1769559914.556:771): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff9afed40 a2=3 a3=0 items=0 ppid=1 pid=6065 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:14.556000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:25:14.606499 kernel: audit: type=1327 audit(1769559914.556:771): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:25:14.606980 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 28 00:25:14.609000 audit[6065]: USER_START pid=6065 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:14.619000 audit[6069]: CRED_ACQ pid=6069 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:14.644023 kernel: audit: type=1105 audit(1769559914.609:772): pid=6065 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:14.644090 kernel: audit: type=1103 audit(1769559914.619:773): pid=6069 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:14.848971 sshd[6069]: Connection closed by 10.200.16.10 port 57578 Jan 28 00:25:14.849793 sshd-session[6065]: pam_unix(sshd:session): session closed for user core Jan 28 00:25:14.850000 audit[6065]: USER_END pid=6065 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:14.858164 systemd[1]: sshd@7-10.200.20.33:22-10.200.16.10:57578.service: Deactivated successfully. Jan 28 00:25:14.859630 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 00:25:14.862617 systemd-logind[2162]: Session 11 logged out. Waiting for processes to exit. Jan 28 00:25:14.863883 systemd-logind[2162]: Removed session 11. Jan 28 00:25:14.850000 audit[6065]: CRED_DISP pid=6065 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:14.887856 kernel: audit: type=1106 audit(1769559914.850:774): pid=6065 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:14.888071 kernel: audit: type=1104 audit(1769559914.850:775): pid=6065 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:14.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.33:22-10.200.16.10:57578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:25:15.364367 containerd[2198]: time="2026-01-28T00:25:15.364321671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 00:25:15.671549 containerd[2198]: time="2026-01-28T00:25:15.671382579Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:25:15.675014 containerd[2198]: time="2026-01-28T00:25:15.674907666Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 00:25:15.675014 containerd[2198]: time="2026-01-28T00:25:15.674934739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 00:25:15.675133 kubelet[3705]: E0128 00:25:15.675084 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:25:15.675415 kubelet[3705]: E0128 00:25:15.675136 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:25:15.675415 kubelet[3705]: E0128 00:25:15.675233 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6lkvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-758f45684b-zdqfk_calico-system(11c0eb0b-ad29-4c1b-b01f-f65a107c6011): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 00:25:15.676472 kubelet[3705]: E0128 00:25:15.676430 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f45684b-zdqfk" podUID="11c0eb0b-ad29-4c1b-b01f-f65a107c6011" Jan 28 00:25:16.362762 containerd[2198]: time="2026-01-28T00:25:16.362657221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:25:16.607512 containerd[2198]: time="2026-01-28T00:25:16.607405601Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:25:16.611346 containerd[2198]: time="2026-01-28T00:25:16.611261872Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:25:16.611480 containerd[2198]: time="2026-01-28T00:25:16.611318674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 00:25:16.611529 kubelet[3705]: E0128 00:25:16.611455 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:25:16.611529 kubelet[3705]: E0128 00:25:16.611505 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:25:16.611649 kubelet[3705]: E0128 00:25:16.611604 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8z95d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65cb8f7567-dv7tt_calico-apiserver(f8bf6ab5-ad5c-41b7-962e-92c73fabe079): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:25:16.613120 kubelet[3705]: E0128 00:25:16.613026 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65cb8f7567-dv7tt" podUID="f8bf6ab5-ad5c-41b7-962e-92c73fabe079" Jan 28 00:25:17.363335 containerd[2198]: time="2026-01-28T00:25:17.363284515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:25:17.648454 containerd[2198]: time="2026-01-28T00:25:17.647947361Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:25:17.651934 containerd[2198]: time="2026-01-28T00:25:17.651820882Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:25:17.651934 containerd[2198]: time="2026-01-28T00:25:17.651896980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 00:25:17.652196 
kubelet[3705]: E0128 00:25:17.652143 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:25:17.652507 kubelet[3705]: E0128 00:25:17.652204 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:25:17.652507 kubelet[3705]: E0128 00:25:17.652308 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rcvr7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6db66f5c9f-tsvdr_calico-apiserver(d774fe09-bd7c-498b-91a3-e6c2f720c9c3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:25:17.653505 kubelet[3705]: E0128 00:25:17.653469 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-tsvdr" podUID="d774fe09-bd7c-498b-91a3-e6c2f720c9c3" Jan 28 00:25:18.364565 containerd[2198]: time="2026-01-28T00:25:18.364168784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 00:25:18.634463 containerd[2198]: time="2026-01-28T00:25:18.634224181Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:25:18.637907 containerd[2198]: time="2026-01-28T00:25:18.637764564Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 00:25:18.637907 containerd[2198]: time="2026-01-28T00:25:18.637860487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 00:25:18.638045 kubelet[3705]: E0128 00:25:18.637990 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:25:18.638045 kubelet[3705]: E0128 00:25:18.638035 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:25:18.638860 kubelet[3705]: E0128 00:25:18.638211 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhxzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gxssn_calico-system(37d401e3-39ef-4596-8144-de1aba842d50): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 00:25:18.638973 containerd[2198]: time="2026-01-28T00:25:18.638387421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 00:25:18.640646 kubelet[3705]: E0128 00:25:18.640218 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxssn" podUID="37d401e3-39ef-4596-8144-de1aba842d50" Jan 28 00:25:18.939294 containerd[2198]: time="2026-01-28T00:25:18.939180134Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:25:18.944540 containerd[2198]: time="2026-01-28T00:25:18.944448291Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 00:25:18.944540 containerd[2198]: time="2026-01-28T00:25:18.944495909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 00:25:18.944825 kubelet[3705]: E0128 00:25:18.944741 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:25:18.945206 kubelet[3705]: E0128 00:25:18.944809 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:25:18.945206 kubelet[3705]: E0128 00:25:18.945159 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pk8cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w2sfv_calico-system(f6d08a70-95be-4168-8a2f-3e965a6278e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 00:25:18.946962 containerd[2198]: time="2026-01-28T00:25:18.946938182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 00:25:19.199912 containerd[2198]: time="2026-01-28T00:25:19.197724829Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:25:19.203442 containerd[2198]: time="2026-01-28T00:25:19.203322067Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 00:25:19.203442 containerd[2198]: time="2026-01-28T00:25:19.203405006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 00:25:19.204094 kubelet[3705]: E0128 00:25:19.203635 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:25:19.204094 kubelet[3705]: E0128 00:25:19.203682 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:25:19.204094 kubelet[3705]: E0128 00:25:19.203770 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pk8cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w2sfv_calico-system(f6d08a70-95be-4168-8a2f-3e965a6278e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 00:25:19.205199 kubelet[3705]: E0128 00:25:19.205167 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w2sfv" podUID="f6d08a70-95be-4168-8a2f-3e965a6278e2" Jan 28 00:25:19.363435 containerd[2198]: time="2026-01-28T00:25:19.363396560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:25:19.644751 containerd[2198]: 
time="2026-01-28T00:25:19.644702348Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:25:19.648431 containerd[2198]: time="2026-01-28T00:25:19.648398648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:25:19.648556 containerd[2198]: time="2026-01-28T00:25:19.648474762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 00:25:19.648728 kubelet[3705]: E0128 00:25:19.648652 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:25:19.648728 kubelet[3705]: E0128 00:25:19.648715 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:25:19.648935 kubelet[3705]: E0128 00:25:19.648905 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5v98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-6db66f5c9f-n9l6g_calico-apiserver(d62fc2fd-8ccc-48de-b10b-98a0aa5672ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:25:19.651051 kubelet[3705]: E0128 00:25:19.651003 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-n9l6g" podUID="d62fc2fd-8ccc-48de-b10b-98a0aa5672ea" Jan 28 00:25:19.942043 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 00:25:19.942132 kernel: audit: type=1130 audit(1769559919.937:777): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.33:22-10.200.16.10:39838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:19.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.33:22-10.200.16.10:39838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:19.937991 systemd[1]: Started sshd@8-10.200.20.33:22-10.200.16.10:39838.service - OpenSSH per-connection server daemon (10.200.16.10:39838). Jan 28 00:25:20.377031 sshd[6084]: Accepted publickey for core from 10.200.16.10 port 39838 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:25:20.376000 audit[6084]: USER_ACCT pid=6084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:20.383295 sshd-session[6084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:25:20.381000 audit[6084]: CRED_ACQ pid=6084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:20.420375 kernel: audit: type=1101 audit(1769559920.376:778): pid=6084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:20.420467 kernel: audit: type=1103 audit(1769559920.381:779): pid=6084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:20.432916 kernel: audit: type=1006 audit(1769559920.381:780): pid=6084 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 28 00:25:20.381000 audit[6084]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcfa8e870 a2=3 a3=0 items=0 ppid=1 pid=6084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:20.453664 kernel: audit: type=1300 audit(1769559920.381:780): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcfa8e870 a2=3 a3=0 items=0 ppid=1 pid=6084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:20.381000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:25:20.462604 kernel: audit: type=1327 audit(1769559920.381:780): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:25:20.463033 systemd-logind[2162]: New session 12 of user core. Jan 28 00:25:20.466069 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 00:25:20.468000 audit[6084]: USER_START pid=6084 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:20.494000 audit[6088]: CRED_ACQ pid=6088 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:20.510904 kernel: audit: type=1105 audit(1769559920.468:781): pid=6084 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:20.510991 kernel: audit: type=1103 audit(1769559920.494:782): pid=6088 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:20.717011 sshd[6088]: Connection closed by 10.200.16.10 port 39838 Jan 28 00:25:20.717742 sshd-session[6084]: pam_unix(sshd:session): session closed for user core Jan 28 00:25:20.719000 audit[6084]: USER_END pid=6084 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:20.723079 systemd[1]: sshd@8-10.200.20.33:22-10.200.16.10:39838.service: Deactivated successfully. Jan 28 00:25:20.725262 systemd[1]: session-12.scope: Deactivated successfully. 
Jan 28 00:25:20.719000 audit[6084]: CRED_DISP pid=6084 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:20.760834 kernel: audit: type=1106 audit(1769559920.719:783): pid=6084 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:20.760918 kernel: audit: type=1104 audit(1769559920.719:784): pid=6084 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:20.762110 systemd-logind[2162]: Session 12 logged out. Waiting for processes to exit. Jan 28 00:25:20.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.33:22-10.200.16.10:39838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:20.764212 systemd-logind[2162]: Removed session 12. Jan 28 00:25:21.362668 kubelet[3705]: E0128 00:25:21.362597 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65d59647fb-65s4b" podUID="e2c157ae-30f5-408a-abb4-61e3e5e3c10f" Jan 28 00:25:25.807154 systemd[1]: Started sshd@9-10.200.20.33:22-10.200.16.10:39850.service - OpenSSH per-connection server daemon (10.200.16.10:39850). Jan 28 00:25:25.828116 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 00:25:25.828151 kernel: audit: type=1130 audit(1769559925.805:786): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.33:22-10.200.16.10:39850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:25.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.33:22-10.200.16.10:39850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:25:26.245000 audit[6126]: USER_ACCT pid=6126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:26.248992 sshd[6126]: Accepted publickey for core from 10.200.16.10 port 39850 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:25:26.263000 audit[6126]: CRED_ACQ pid=6126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:26.266316 sshd-session[6126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:25:26.280491 kernel: audit: type=1101 audit(1769559926.245:787): pid=6126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:26.280560 kernel: audit: type=1103 audit(1769559926.263:788): pid=6126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:26.290419 kernel: audit: type=1006 audit(1769559926.263:789): pid=6126 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 28 00:25:26.263000 audit[6126]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd6e2a0e0 a2=3 a3=0 items=0 ppid=1 pid=6126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:26.307371 kernel: audit: type=1300 audit(1769559926.263:789): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd6e2a0e0 a2=3 a3=0 items=0 ppid=1 pid=6126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:26.263000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:25:26.314080 kernel: audit: type=1327 audit(1769559926.263:789): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:25:26.316878 systemd-logind[2162]: New session 13 of user core. Jan 28 00:25:26.324945 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 28 00:25:26.325000 audit[6126]: USER_START pid=6126 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:26.346000 audit[6130]: CRED_ACQ pid=6130 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:26.361649 kernel: audit: type=1105 audit(1769559926.325:790): pid=6126 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:26.361724 kernel: audit: type=1103 audit(1769559926.346:791): pid=6130 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:26.551631 sshd[6130]: Connection closed by 10.200.16.10 port 39850 Jan 28 00:25:26.552556 sshd-session[6126]: pam_unix(sshd:session): session closed for user core Jan 28 00:25:26.552000 audit[6126]: USER_END pid=6126 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:26.558445 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 00:25:26.561699 systemd[1]: sshd@9-10.200.20.33:22-10.200.16.10:39850.service: Deactivated successfully. Jan 28 00:25:26.553000 audit[6126]: CRED_DISP pid=6126 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:26.589318 kernel: audit: type=1106 audit(1769559926.552:792): pid=6126 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:26.589384 kernel: audit: type=1104 audit(1769559926.553:793): pid=6126 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:26.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.33:22-10.200.16.10:39850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:26.591024 systemd-logind[2162]: Session 13 logged out. Waiting for processes to exit. Jan 28 00:25:26.592685 systemd-logind[2162]: Removed session 13. 
Jan 28 00:25:26.631428 systemd[1]: Started sshd@10-10.200.20.33:22-10.200.16.10:39854.service - OpenSSH per-connection server daemon (10.200.16.10:39854). Jan 28 00:25:26.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.33:22-10.200.16.10:39854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:27.016000 audit[6143]: USER_ACCT pid=6143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:27.018665 sshd[6143]: Accepted publickey for core from 10.200.16.10 port 39854 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:25:27.017000 audit[6143]: CRED_ACQ pid=6143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:27.018000 audit[6143]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd5352500 a2=3 a3=0 items=0 ppid=1 pid=6143 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:27.018000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:25:27.020711 sshd-session[6143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:25:27.024752 systemd-logind[2162]: New session 14 of user core. Jan 28 00:25:27.033959 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 28 00:25:27.034000 audit[6143]: USER_START pid=6143 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:27.035000 audit[6147]: CRED_ACQ pid=6147 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:27.304154 sshd[6147]: Connection closed by 10.200.16.10 port 39854 Jan 28 00:25:27.305006 sshd-session[6143]: pam_unix(sshd:session): session closed for user core Jan 28 00:25:27.305000 audit[6143]: USER_END pid=6143 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:27.305000 audit[6143]: CRED_DISP pid=6143 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:27.309383 systemd[1]: sshd@10-10.200.20.33:22-10.200.16.10:39854.service: Deactivated successfully. 
Jan 28 00:25:27.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.33:22-10.200.16.10:39854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:27.313316 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 00:25:27.315369 systemd-logind[2162]: Session 14 logged out. Waiting for processes to exit. Jan 28 00:25:27.317838 systemd-logind[2162]: Removed session 14. Jan 28 00:25:27.361929 kubelet[3705]: E0128 00:25:27.361887 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f45684b-zdqfk" podUID="11c0eb0b-ad29-4c1b-b01f-f65a107c6011" Jan 28 00:25:27.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.33:22-10.200.16.10:39868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:27.391899 systemd[1]: Started sshd@11-10.200.20.33:22-10.200.16.10:39868.service - OpenSSH per-connection server daemon (10.200.16.10:39868). Jan 28 00:25:27.809000 audit[6157]: USER_ACCT pid=6157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:27.811714 sshd[6157]: Accepted publickey for core from 10.200.16.10 port 39868 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:25:27.810000 audit[6157]: CRED_ACQ pid=6157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:27.810000 audit[6157]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffff0b4920 a2=3 a3=0 items=0 ppid=1 pid=6157 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:27.810000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:25:27.813464 sshd-session[6157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:25:27.818864 systemd-logind[2162]: New session 15 of user core. Jan 28 00:25:27.831944 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 28 00:25:27.832000 audit[6157]: USER_START pid=6157 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:27.833000 audit[6161]: CRED_ACQ pid=6161 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:28.090238 sshd[6161]: Connection closed by 10.200.16.10 port 39868 Jan 28 00:25:28.090511 sshd-session[6157]: pam_unix(sshd:session): session closed for user core Jan 28 00:25:28.091000 audit[6157]: USER_END pid=6157 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:28.091000 audit[6157]: CRED_DISP pid=6157 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:28.094655 systemd-logind[2162]: Session 15 logged out. Waiting for processes to exit. Jan 28 00:25:28.095524 systemd[1]: sshd@11-10.200.20.33:22-10.200.16.10:39868.service: Deactivated successfully. Jan 28 00:25:28.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.33:22-10.200.16.10:39868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:28.097510 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 00:25:28.099968 systemd-logind[2162]: Removed session 15. Jan 28 00:25:29.362461 kubelet[3705]: E0128 00:25:29.362407 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-tsvdr" podUID="d774fe09-bd7c-498b-91a3-e6c2f720c9c3" Jan 28 00:25:31.362441 kubelet[3705]: E0128 00:25:31.362397 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65cb8f7567-dv7tt" podUID="f8bf6ab5-ad5c-41b7-962e-92c73fabe079" Jan 28 00:25:33.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.33:22-10.200.16.10:59962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:25:33.184973 systemd[1]: Started sshd@12-10.200.20.33:22-10.200.16.10:59962.service - OpenSSH per-connection server daemon (10.200.16.10:59962). Jan 28 00:25:33.187984 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 28 00:25:33.188047 kernel: audit: type=1130 audit(1769559933.184:813): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.33:22-10.200.16.10:59962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:33.363131 kubelet[3705]: E0128 00:25:33.363095 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-n9l6g" podUID="d62fc2fd-8ccc-48de-b10b-98a0aa5672ea" Jan 28 00:25:33.363485 kubelet[3705]: E0128 00:25:33.363299 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxssn" podUID="37d401e3-39ef-4596-8144-de1aba842d50" Jan 28 00:25:33.617000 audit[6178]: USER_ACCT pid=6178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:33.638977 sshd[6178]: Accepted publickey for core from 10.200.16.10 port 59962 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:25:33.640595 sshd-session[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:25:33.639000 audit[6178]: CRED_ACQ pid=6178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:33.659214 kernel: audit: type=1101 audit(1769559933.617:814): pid=6178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:33.659278 kernel: audit: type=1103 audit(1769559933.639:815): pid=6178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:33.670880 kernel: audit: type=1006 audit(1769559933.639:816): pid=6178 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 28 00:25:33.673118 systemd-logind[2162]: New session 16 of user core. 
Jan 28 00:25:33.639000 audit[6178]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff3c6a560 a2=3 a3=0 items=0 ppid=1 pid=6178 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:33.691853 kernel: audit: type=1300 audit(1769559933.639:816): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff3c6a560 a2=3 a3=0 items=0 ppid=1 pid=6178 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:33.694989 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 00:25:33.639000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:25:33.703932 kernel: audit: type=1327 audit(1769559933.639:816): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:25:33.704000 audit[6178]: USER_START pid=6178 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:33.727790 kernel: audit: type=1105 audit(1769559933.704:817): pid=6178 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:33.706000 audit[6183]: CRED_ACQ pid=6183 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:33.743248 kernel: audit: type=1103 audit(1769559933.706:818): pid=6183 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:33.912112 sshd[6183]: Connection closed by 10.200.16.10 port 59962 Jan 28 00:25:33.913520 sshd-session[6178]: pam_unix(sshd:session): session closed for user core Jan 28 00:25:33.915000 audit[6178]: USER_END pid=6178 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:33.918711 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 00:25:33.921392 systemd[1]: sshd@12-10.200.20.33:22-10.200.16.10:59962.service: Deactivated successfully. Jan 28 00:25:33.915000 audit[6178]: CRED_DISP pid=6178 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:33.937021 systemd-logind[2162]: Session 16 logged out. Waiting for processes to exit. Jan 28 00:25:33.940062 systemd-logind[2162]: Removed session 16. 
Jan 28 00:25:33.950512 kernel: audit: type=1106 audit(1769559933.915:819): pid=6178 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:33.950590 kernel: audit: type=1104 audit(1769559933.915:820): pid=6178 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:33.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.33:22-10.200.16.10:59962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:34.364935 kubelet[3705]: E0128 00:25:34.364872 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w2sfv" podUID="f6d08a70-95be-4168-8a2f-3e965a6278e2" Jan 28 00:25:36.366716 kubelet[3705]: E0128 00:25:36.366621 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65d59647fb-65s4b" podUID="e2c157ae-30f5-408a-abb4-61e3e5e3c10f" Jan 28 00:25:39.015309 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 00:25:39.015426 kernel: audit: type=1130 audit(1769559938.996:822): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.33:22-10.200.16.10:59970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:38.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.33:22-10.200.16.10:59970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:25:38.997332 systemd[1]: Started sshd@13-10.200.20.33:22-10.200.16.10:59970.service - OpenSSH per-connection server daemon (10.200.16.10:59970). Jan 28 00:25:39.362238 kubelet[3705]: E0128 00:25:39.362169 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f45684b-zdqfk" podUID="11c0eb0b-ad29-4c1b-b01f-f65a107c6011" Jan 28 00:25:39.400000 audit[6198]: USER_ACCT pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:39.417192 sshd[6198]: Accepted publickey for core from 10.200.16.10 port 59970 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:25:39.417175 sshd-session[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:25:39.415000 audit[6198]: CRED_ACQ pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:39.432014 kernel: audit: type=1101 audit(1769559939.400:823): pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:39.432079 kernel: audit: type=1103 audit(1769559939.415:824): pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:39.441990 kernel: audit: type=1006 audit(1769559939.415:825): pid=6198 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 28 00:25:39.415000 audit[6198]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff9d3c0a0 a2=3 a3=0 items=0 ppid=1 pid=6198 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:39.461034 kernel: audit: type=1300 audit(1769559939.415:825): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff9d3c0a0 a2=3 a3=0 items=0 ppid=1 pid=6198 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:39.415000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:25:39.467000 kernel: audit: type=1327 audit(1769559939.415:825): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:25:39.467954 systemd-logind[2162]: New session 17 of user core. 
Jan 28 00:25:39.473975 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 28 00:25:39.475000 audit[6198]: USER_START pid=6198 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:39.496000 audit[6202]: CRED_ACQ pid=6202 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:39.511946 kernel: audit: type=1105 audit(1769559939.475:826): pid=6198 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:39.512013 kernel: audit: type=1103 audit(1769559939.496:827): pid=6202 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:39.693297 sshd[6202]: Connection closed by 10.200.16.10 port 59970 Jan 28 00:25:39.694078 sshd-session[6198]: pam_unix(sshd:session): session closed for user core Jan 28 00:25:39.696000 audit[6198]: USER_END pid=6198 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:39.701271 systemd-logind[2162]: Session 17 logged out. Waiting for processes to exit. Jan 28 00:25:39.710296 systemd[1]: sshd@13-10.200.20.33:22-10.200.16.10:59970.service: Deactivated successfully. Jan 28 00:25:39.713627 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 00:25:39.717218 systemd-logind[2162]: Removed session 17. 
Jan 28 00:25:39.696000 audit[6198]: CRED_DISP pid=6198 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:39.733401 kernel: audit: type=1106 audit(1769559939.696:828): pid=6198 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:39.733466 kernel: audit: type=1104 audit(1769559939.696:829): pid=6198 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:39.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.33:22-10.200.16.10:59970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:44.363897 kubelet[3705]: E0128 00:25:44.363685 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-tsvdr" podUID="d774fe09-bd7c-498b-91a3-e6c2f720c9c3" Jan 28 00:25:44.778667 systemd[1]: Started sshd@14-10.200.20.33:22-10.200.16.10:36240.service - OpenSSH per-connection server daemon (10.200.16.10:36240). Jan 28 00:25:44.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.33:22-10.200.16.10:36240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:44.782444 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 00:25:44.782503 kernel: audit: type=1130 audit(1769559944.777:831): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.33:22-10.200.16.10:36240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:25:45.185000 audit[6213]: USER_ACCT pid=6213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:45.186984 sshd[6213]: Accepted publickey for core from 10.200.16.10 port 36240 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:25:45.203000 audit[6213]: CRED_ACQ pid=6213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:45.209574 sshd-session[6213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:25:45.218712 kernel: audit: type=1101 audit(1769559945.185:832): pid=6213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:45.218766 kernel: audit: type=1103 audit(1769559945.203:833): pid=6213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:45.228910 kernel: audit: type=1006 audit(1769559945.208:834): pid=6213 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 28 00:25:45.208000 audit[6213]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe3e28e40 a2=3 a3=0 items=0 ppid=1 pid=6213 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:45.246050 kernel: audit: type=1300 audit(1769559945.208:834): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe3e28e40 a2=3 a3=0 items=0 ppid=1 pid=6213 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:45.208000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:25:45.249565 systemd-logind[2162]: New session 18 of user core. Jan 28 00:25:45.254600 kernel: audit: type=1327 audit(1769559945.208:834): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:25:45.256984 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 28 00:25:45.259000 audit[6213]: USER_START pid=6213 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:45.261000 audit[6217]: CRED_ACQ pid=6217 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:45.293880 kernel: audit: type=1105 audit(1769559945.259:835): pid=6213 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:45.293945 kernel: audit: type=1103 audit(1769559945.261:836): pid=6217 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:45.362995 kubelet[3705]: E0128 00:25:45.362931 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65cb8f7567-dv7tt" podUID="f8bf6ab5-ad5c-41b7-962e-92c73fabe079" Jan 28 00:25:45.475910 sshd[6217]: Connection closed by 10.200.16.10 port 36240 Jan 28 00:25:45.475997 sshd-session[6213]: pam_unix(sshd:session): session closed for user core Jan 28 00:25:45.477000 audit[6213]: USER_END pid=6213 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:45.480161 systemd-logind[2162]: Session 18 logged out. Waiting for processes to exit. Jan 28 00:25:45.481650 systemd[1]: sshd@14-10.200.20.33:22-10.200.16.10:36240.service: Deactivated successfully. Jan 28 00:25:45.486244 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 00:25:45.489003 systemd-logind[2162]: Removed session 18. 
Jan 28 00:25:45.477000 audit[6213]: CRED_DISP pid=6213 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:45.514341 kernel: audit: type=1106 audit(1769559945.477:837): pid=6213 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:45.514415 kernel: audit: type=1104 audit(1769559945.477:838): pid=6213 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:45.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.33:22-10.200.16.10:36240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:46.363307 kubelet[3705]: E0128 00:25:46.363032 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxssn" podUID="37d401e3-39ef-4596-8144-de1aba842d50" Jan 28 00:25:46.363307 kubelet[3705]: E0128 00:25:46.363128 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-n9l6g" podUID="d62fc2fd-8ccc-48de-b10b-98a0aa5672ea" Jan 28 00:25:46.363307 kubelet[3705]: E0128 00:25:46.363284 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w2sfv" podUID="f6d08a70-95be-4168-8a2f-3e965a6278e2" Jan 28 00:25:48.363156 kubelet[3705]: E0128 00:25:48.363093 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65d59647fb-65s4b" podUID="e2c157ae-30f5-408a-abb4-61e3e5e3c10f" Jan 28 00:25:50.568565 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 00:25:50.568715 kernel: audit: type=1130 audit(1769559950.563:840): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.33:22-10.200.16.10:34632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:50.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.33:22-10.200.16.10:34632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:50.564139 systemd[1]: Started sshd@15-10.200.20.33:22-10.200.16.10:34632.service - OpenSSH per-connection server daemon (10.200.16.10:34632). Jan 28 00:25:51.002715 sshd[6234]: Accepted publickey for core from 10.200.16.10 port 34632 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:25:51.001000 audit[6234]: USER_ACCT pid=6234 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:51.018752 sshd-session[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:25:51.016000 audit[6234]: CRED_ACQ pid=6234 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:51.036433 kernel: audit: type=1101 audit(1769559951.001:841): pid=6234 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:51.036500 kernel: audit: type=1103 audit(1769559951.016:842): pid=6234 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:51.042879 systemd-logind[2162]: New session 19 of user core. 
Jan 28 00:25:51.045836 kernel: audit: type=1006 audit(1769559951.016:843): pid=6234 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 28 00:25:51.016000 audit[6234]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdc984310 a2=3 a3=0 items=0 ppid=1 pid=6234 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:51.061924 kernel: audit: type=1300 audit(1769559951.016:843): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdc984310 a2=3 a3=0 items=0 ppid=1 pid=6234 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:51.016000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:25:51.062990 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 00:25:51.069464 kernel: audit: type=1327 audit(1769559951.016:843): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:25:51.072000 audit[6234]: USER_START pid=6234 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:51.072000 audit[6238]: CRED_ACQ pid=6238 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:51.112037 kernel: audit: type=1105 audit(1769559951.072:844): pid=6234 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:51.112101 kernel: audit: type=1103 audit(1769559951.072:845): pid=6238 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:51.290873 sshd[6238]: Connection closed by 10.200.16.10 port 34632 Jan 28 00:25:51.290358 sshd-session[6234]: pam_unix(sshd:session): session closed for user core Jan 28 00:25:51.291000 audit[6234]: USER_END pid=6234 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:51.294772 systemd-logind[2162]: Session 19 logged out. Waiting for processes to exit. Jan 28 00:25:51.296010 systemd[1]: sshd@15-10.200.20.33:22-10.200.16.10:34632.service: Deactivated successfully. Jan 28 00:25:51.299480 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 00:25:51.303446 systemd-logind[2162]: Removed session 19. 
Jan 28 00:25:51.291000 audit[6234]: CRED_DISP pid=6234 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:51.330249 kernel: audit: type=1106 audit(1769559951.291:846): pid=6234 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:51.330318 kernel: audit: type=1104 audit(1769559951.291:847): pid=6234 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:51.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.33:22-10.200.16.10:34632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:51.367506 systemd[1]: Started sshd@16-10.200.20.33:22-10.200.16.10:34638.service - OpenSSH per-connection server daemon (10.200.16.10:34638). Jan 28 00:25:51.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.33:22-10.200.16.10:34638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:51.763000 audit[6251]: USER_ACCT pid=6251 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:51.764691 sshd[6251]: Accepted publickey for core from 10.200.16.10 port 34638 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:25:51.764000 audit[6251]: CRED_ACQ pid=6251 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:51.764000 audit[6251]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeca8aa30 a2=3 a3=0 items=0 ppid=1 pid=6251 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:51.764000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:25:51.766352 sshd-session[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:25:51.770207 systemd-logind[2162]: New session 20 of user core. Jan 28 00:25:51.778955 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 28 00:25:51.780000 audit[6251]: USER_START pid=6251 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:51.781000 audit[6255]: CRED_ACQ pid=6255 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:52.118391 sshd[6255]: Connection closed by 10.200.16.10 port 34638 Jan 28 00:25:52.119032 sshd-session[6251]: pam_unix(sshd:session): session closed for user core Jan 28 00:25:52.120000 audit[6251]: USER_END pid=6251 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:52.120000 audit[6251]: CRED_DISP pid=6251 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:52.123170 systemd[1]: sshd@16-10.200.20.33:22-10.200.16.10:34638.service: Deactivated successfully. Jan 28 00:25:52.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.33:22-10.200.16.10:34638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:52.126263 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 00:25:52.127906 systemd-logind[2162]: Session 20 logged out. Waiting for processes to exit. Jan 28 00:25:52.128901 systemd-logind[2162]: Removed session 20. Jan 28 00:25:52.214252 systemd[1]: Started sshd@17-10.200.20.33:22-10.200.16.10:34642.service - OpenSSH per-connection server daemon (10.200.16.10:34642). Jan 28 00:25:52.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.33:22-10.200.16.10:34642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:25:52.363859 kubelet[3705]: E0128 00:25:52.363500 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f45684b-zdqfk" podUID="11c0eb0b-ad29-4c1b-b01f-f65a107c6011" Jan 28 00:25:52.629000 audit[6265]: USER_ACCT pid=6265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:52.631501 sshd[6265]: Accepted publickey for core from 10.200.16.10 port 34642 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:25:52.631000 audit[6265]: CRED_ACQ pid=6265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:52.631000 audit[6265]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe22bd720 a2=3 a3=0 items=0 ppid=1 pid=6265 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:52.631000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:25:52.632468 sshd-session[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:25:52.635888 systemd-logind[2162]: New session 21 of user core. Jan 28 00:25:52.642965 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 28 00:25:52.644000 audit[6265]: USER_START pid=6265 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:52.646000 audit[6269]: CRED_ACQ pid=6269 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:53.330000 audit[6286]: NETFILTER_CFG table=filter:154 family=2 entries=26 op=nft_register_rule pid=6286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:25:53.330000 audit[6286]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd0d00ff0 a2=0 a3=1 items=0 ppid=3806 pid=6286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:53.330000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:25:53.336000 audit[6286]: NETFILTER_CFG table=nat:155 family=2 entries=20 op=nft_register_rule pid=6286 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:25:53.336000 audit[6286]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd0d00ff0 a2=0 a3=1 items=0 ppid=3806 pid=6286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:53.336000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:25:53.352000 audit[6288]: NETFILTER_CFG table=filter:156 family=2 entries=38 op=nft_register_rule pid=6288 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:25:53.352000 audit[6288]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffc5bd2710 a2=0 a3=1 items=0 ppid=3806 pid=6288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:53.352000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:25:53.357000 audit[6288]: NETFILTER_CFG table=nat:157 family=2 entries=20 op=nft_register_rule pid=6288 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:25:53.357000 audit[6288]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc5bd2710 a2=0 a3=1 items=0 ppid=3806 pid=6288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:53.357000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:25:53.427590 sshd[6269]: Connection closed by 10.200.16.10 port 34642 Jan 28 00:25:53.426938 sshd-session[6265]: pam_unix(sshd:session): session closed for user core Jan 28 
00:25:53.427000 audit[6265]: USER_END pid=6265 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:53.427000 audit[6265]: CRED_DISP pid=6265 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:53.431005 systemd[1]: sshd@17-10.200.20.33:22-10.200.16.10:34642.service: Deactivated successfully. Jan 28 00:25:53.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.33:22-10.200.16.10:34642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:53.432661 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 00:25:53.433458 systemd-logind[2162]: Session 21 logged out. Waiting for processes to exit. Jan 28 00:25:53.434512 systemd-logind[2162]: Removed session 21. Jan 28 00:25:53.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.33:22-10.200.16.10:34644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:53.503097 systemd[1]: Started sshd@18-10.200.20.33:22-10.200.16.10:34644.service - OpenSSH per-connection server daemon (10.200.16.10:34644). Jan 28 00:25:53.901000 audit[6293]: USER_ACCT pid=6293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:53.902105 sshd[6293]: Accepted publickey for core from 10.200.16.10 port 34644 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:25:53.901000 audit[6293]: CRED_ACQ pid=6293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:53.901000 audit[6293]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd1cb9330 a2=3 a3=0 items=0 ppid=1 pid=6293 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:53.901000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:25:53.903520 sshd-session[6293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:25:53.908325 systemd-logind[2162]: New session 22 of user core. Jan 28 00:25:53.910947 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 28 00:25:53.912000 audit[6293]: USER_START pid=6293 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:53.914000 audit[6297]: CRED_ACQ pid=6297 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:54.254557 sshd[6297]: Connection closed by 10.200.16.10 port 34644 Jan 28 00:25:54.254770 sshd-session[6293]: pam_unix(sshd:session): session closed for user core Jan 28 00:25:54.255000 audit[6293]: USER_END pid=6293 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:54.255000 audit[6293]: CRED_DISP pid=6293 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:54.259003 systemd[1]: sshd@18-10.200.20.33:22-10.200.16.10:34644.service: Deactivated successfully. Jan 28 00:25:54.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.33:22-10.200.16.10:34644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:54.260593 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 00:25:54.263411 systemd-logind[2162]: Session 22 logged out. Waiting for processes to exit. Jan 28 00:25:54.266009 systemd-logind[2162]: Removed session 22. Jan 28 00:25:54.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.33:22-10.200.16.10:34654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:54.348115 systemd[1]: Started sshd@19-10.200.20.33:22-10.200.16.10:34654.service - OpenSSH per-connection server daemon (10.200.16.10:34654). 
Jan 28 00:25:54.777000 audit[6307]: USER_ACCT pid=6307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:54.778811 sshd[6307]: Accepted publickey for core from 10.200.16.10 port 34654 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:25:54.778000 audit[6307]: CRED_ACQ pid=6307 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:54.778000 audit[6307]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc6a1bcf0 a2=3 a3=0 items=0 ppid=1 pid=6307 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:54.778000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:25:54.780309 sshd-session[6307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:25:54.784132 systemd-logind[2162]: New session 23 of user core. Jan 28 00:25:54.788967 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 28 00:25:54.790000 audit[6307]: USER_START pid=6307 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:54.791000 audit[6334]: CRED_ACQ pid=6334 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:55.059258 sshd[6334]: Connection closed by 10.200.16.10 port 34654 Jan 28 00:25:55.059900 sshd-session[6307]: pam_unix(sshd:session): session closed for user core Jan 28 00:25:55.061000 audit[6307]: USER_END pid=6307 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:55.061000 audit[6307]: CRED_DISP pid=6307 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:25:55.064252 systemd[1]: sshd@19-10.200.20.33:22-10.200.16.10:34654.service: Deactivated successfully. Jan 28 00:25:55.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.33:22-10.200.16.10:34654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:25:55.065958 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 00:25:55.067287 systemd-logind[2162]: Session 23 logged out. Waiting for processes to exit. 
Jan 28 00:25:55.068079 systemd-logind[2162]: Removed session 23. Jan 28 00:25:57.363215 kubelet[3705]: E0128 00:25:57.363169 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-n9l6g" podUID="d62fc2fd-8ccc-48de-b10b-98a0aa5672ea" Jan 28 00:25:57.363618 kubelet[3705]: E0128 00:25:57.363250 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-tsvdr" podUID="d774fe09-bd7c-498b-91a3-e6c2f720c9c3" Jan 28 00:25:57.364499 kubelet[3705]: E0128 00:25:57.364228 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w2sfv" podUID="f6d08a70-95be-4168-8a2f-3e965a6278e2" Jan 28 00:25:57.364499 kubelet[3705]: E0128 00:25:57.364319 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxssn" podUID="37d401e3-39ef-4596-8144-de1aba842d50" Jan 28 00:25:58.363374 containerd[2198]: time="2026-01-28T00:25:58.363332243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:25:58.640000 audit[6348]: NETFILTER_CFG table=filter:158 family=2 entries=26 op=nft_register_rule pid=6348 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:25:58.645882 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 28 00:25:58.645953 kernel: audit: type=1325 audit(1769559958.640:889): table=filter:158 family=2 entries=26 op=nft_register_rule pid=6348 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:25:58.640000 audit[6348]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc9802830 a2=0 a3=1 items=0 ppid=3806 pid=6348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:58.671575 kernel: audit: type=1300 audit(1769559958.640:889): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc9802830 a2=0 a3=1 items=0 ppid=3806 pid=6348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:58.640000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:25:58.673726 containerd[2198]: time="2026-01-28T00:25:58.673581028Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:25:58.680797 kernel: audit: type=1327 audit(1769559958.640:889): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:25:58.655000 audit[6348]: NETFILTER_CFG table=nat:159 family=2 entries=104 op=nft_register_chain pid=6348 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:25:58.689805 kernel: audit: type=1325 audit(1769559958.655:890): table=nat:159 family=2 entries=104 op=nft_register_chain pid=6348 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:25:58.655000 audit[6348]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffc9802830 a2=0 a3=1 items=0 ppid=3806 pid=6348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:58.691267 containerd[2198]: time="2026-01-28T00:25:58.690525837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:25:58.691267 containerd[2198]: time="2026-01-28T00:25:58.690701409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 00:25:58.691463 kubelet[3705]: E0128 00:25:58.690804 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:25:58.691463 kubelet[3705]: E0128 00:25:58.690884 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:25:58.691463 kubelet[3705]: E0128 00:25:58.690980 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8z95d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65cb8f7567-dv7tt_calico-apiserver(f8bf6ab5-ad5c-41b7-962e-92c73fabe079): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:25:58.692531 kubelet[3705]: E0128 00:25:58.692456 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65cb8f7567-dv7tt" podUID="f8bf6ab5-ad5c-41b7-962e-92c73fabe079" Jan 28 00:25:58.706986 kernel: audit: type=1300 audit(1769559958.655:890): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffc9802830 a2=0 a3=1 items=0 ppid=3806 pid=6348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:25:58.655000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:25:58.719835 kernel: audit: type=1327 audit(1769559958.655:890): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:26:00.148039 systemd[1]: Started sshd@20-10.200.20.33:22-10.200.16.10:60360.service - OpenSSH per-connection server daemon (10.200.16.10:60360). 
Jan 28 00:26:00.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.33:22-10.200.16.10:60360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:26:00.162915 kernel: audit: type=1130 audit(1769559960.146:891): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.33:22-10.200.16.10:60360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:26:00.576847 sshd[6350]: Accepted publickey for core from 10.200.16.10 port 60360 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:26:00.574000 audit[6350]: USER_ACCT pid=6350 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:00.595850 sshd-session[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:26:00.593000 audit[6350]: CRED_ACQ pid=6350 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:00.614582 kernel: audit: type=1101 audit(1769559960.574:892): pid=6350 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:00.614646 kernel: audit: type=1103 audit(1769559960.593:893): pid=6350 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:00.622775 systemd-logind[2162]: New session 24 of user core. Jan 28 00:26:00.631441 kernel: audit: type=1006 audit(1769559960.593:894): pid=6350 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 28 00:26:00.593000 audit[6350]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffffb95610 a2=3 a3=0 items=0 ppid=1 pid=6350 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:26:00.593000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:26:00.635013 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 28 00:26:00.635000 audit[6350]: USER_START pid=6350 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:00.636000 audit[6354]: CRED_ACQ pid=6354 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:00.854987 sshd[6354]: Connection closed by 10.200.16.10 port 60360 Jan 28 00:26:00.855479 sshd-session[6350]: pam_unix(sshd:session): session closed for user core Jan 28 00:26:00.856000 audit[6350]: USER_END pid=6350 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:00.857000 audit[6350]: CRED_DISP pid=6350 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:00.861674 systemd[1]: sshd@20-10.200.20.33:22-10.200.16.10:60360.service: Deactivated successfully. Jan 28 00:26:00.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.33:22-10.200.16.10:60360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:26:00.863390 systemd[1]: session-24.scope: Deactivated successfully. Jan 28 00:26:00.864421 systemd-logind[2162]: Session 24 logged out. Waiting for processes to exit. Jan 28 00:26:00.867267 systemd-logind[2162]: Removed session 24. 
Jan 28 00:26:03.362845 containerd[2198]: time="2026-01-28T00:26:03.362676154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 00:26:03.611642 containerd[2198]: time="2026-01-28T00:26:03.611603470Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:26:03.614907 containerd[2198]: time="2026-01-28T00:26:03.614831949Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 00:26:03.614965 containerd[2198]: time="2026-01-28T00:26:03.614895215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 00:26:03.615126 kubelet[3705]: E0128 00:26:03.615066 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:26:03.615715 kubelet[3705]: E0128 00:26:03.615371 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:26:03.615715 kubelet[3705]: E0128 00:26:03.615463 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5b63b438c56a4f4382ff93bbd04b95ca,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sl6bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65d59647fb-65s4b_calico-system(e2c157ae-30f5-408a-abb4-61e3e5e3c10f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 00:26:03.617248 containerd[2198]: time="2026-01-28T00:26:03.617200349Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 00:26:03.842226 containerd[2198]: time="2026-01-28T00:26:03.842066806Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:26:03.845587 containerd[2198]: time="2026-01-28T00:26:03.845550885Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 00:26:03.845789 containerd[2198]: time="2026-01-28T00:26:03.845689080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 00:26:03.845925 kubelet[3705]: E0128 00:26:03.845888 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:26:03.845970 kubelet[3705]: E0128 00:26:03.845931 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:26:03.846045 kubelet[3705]: E0128 00:26:03.846016 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sl6bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65d59647fb-65s4b_calico-system(e2c157ae-30f5-408a-abb4-61e3e5e3c10f): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 00:26:03.847944 kubelet[3705]: E0128 00:26:03.847867 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65d59647fb-65s4b" podUID="e2c157ae-30f5-408a-abb4-61e3e5e3c10f" Jan 28 00:26:05.362022 containerd[2198]: time="2026-01-28T00:26:05.361986525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 00:26:05.617157 containerd[2198]: time="2026-01-28T00:26:05.616915076Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:26:05.620449 containerd[2198]: time="2026-01-28T00:26:05.620361394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 00:26:05.620528 containerd[2198]: time="2026-01-28T00:26:05.620421716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 00:26:05.620620 kubelet[3705]: E0128 00:26:05.620570 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:26:05.620888 kubelet[3705]: E0128 00:26:05.620626 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:26:05.620888 kubelet[3705]: E0128 00:26:05.620726 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6lkvb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-758f45684b-zdqfk_calico-system(11c0eb0b-ad29-4c1b-b01f-f65a107c6011): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 00:26:05.621955 kubelet[3705]: E0128 00:26:05.621922 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f45684b-zdqfk" podUID="11c0eb0b-ad29-4c1b-b01f-f65a107c6011" Jan 28 00:26:05.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.33:22-10.200.16.10:60374 comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Jan 28 00:26:05.945090 systemd[1]: Started sshd@21-10.200.20.33:22-10.200.16.10:60374.service - OpenSSH per-connection server daemon (10.200.16.10:60374). Jan 28 00:26:05.948224 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 28 00:26:05.948291 kernel: audit: type=1130 audit(1769559965.944:900): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.33:22-10.200.16.10:60374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:26:06.378000 audit[6388]: USER_ACCT pid=6388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:06.379727 sshd[6388]: Accepted publickey for core from 10.200.16.10 port 60374 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:26:06.395000 audit[6388]: CRED_ACQ pid=6388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:06.396790 sshd-session[6388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:26:06.409984 kernel: audit: type=1101 audit(1769559966.378:901): pid=6388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:06.410045 kernel: audit: type=1103 audit(1769559966.395:902): pid=6388 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:06.420021 kernel: audit: type=1006 audit(1769559966.395:903): pid=6388 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 28 00:26:06.395000 audit[6388]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc563a010 a2=3 a3=0 items=0 ppid=1 pid=6388 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:26:06.437743 kernel: audit: type=1300 audit(1769559966.395:903): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc563a010 a2=3 a3=0 items=0 ppid=1 pid=6388 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:26:06.395000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:26:06.444882 kernel: audit: type=1327 audit(1769559966.395:903): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:26:06.447028 systemd-logind[2162]: New session 25 of user core. Jan 28 00:26:06.451977 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 28 00:26:06.453000 audit[6388]: USER_START pid=6388 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:06.453000 audit[6392]: CRED_ACQ pid=6392 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:06.488127 kernel: audit: type=1105 audit(1769559966.453:904): pid=6388 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:06.488187 kernel: audit: type=1103 audit(1769559966.453:905): pid=6392 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:06.672777 sshd[6392]: Connection closed by 10.200.16.10 port 60374 Jan 28 00:26:06.672302 sshd-session[6388]: pam_unix(sshd:session): session closed for user core Jan 28 00:26:06.672000 audit[6388]: USER_END pid=6388 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:06.693656 systemd[1]: sshd@21-10.200.20.33:22-10.200.16.10:60374.service: Deactivated successfully. Jan 28 00:26:06.672000 audit[6388]: CRED_DISP pid=6388 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:06.709918 kernel: audit: type=1106 audit(1769559966.672:906): pid=6388 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:06.709985 kernel: audit: type=1104 audit(1769559966.672:907): pid=6388 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:06.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.33:22-10.200.16.10:60374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:26:06.698317 systemd[1]: session-25.scope: Deactivated successfully. Jan 28 00:26:06.699944 systemd-logind[2162]: Session 25 logged out. Waiting for processes to exit. Jan 28 00:26:06.703450 systemd-logind[2162]: Removed session 25. 
Jan 28 00:26:09.362560 containerd[2198]: time="2026-01-28T00:26:09.362303104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:26:09.639765 containerd[2198]: time="2026-01-28T00:26:09.639555277Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:26:09.642697 containerd[2198]: time="2026-01-28T00:26:09.642663497Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:26:09.642757 containerd[2198]: time="2026-01-28T00:26:09.642734899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 00:26:09.642968 kubelet[3705]: E0128 00:26:09.642932 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:26:09.643238 kubelet[3705]: E0128 00:26:09.642977 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:26:09.643289 containerd[2198]: time="2026-01-28T00:26:09.643263385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:26:09.643652 kubelet[3705]: E0128 00:26:09.643612 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5v98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6db66f5c9f-n9l6g_calico-apiserver(d62fc2fd-8ccc-48de-b10b-98a0aa5672ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:26:09.644980 kubelet[3705]: E0128 00:26:09.644930 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-n9l6g" podUID="d62fc2fd-8ccc-48de-b10b-98a0aa5672ea" Jan 28 00:26:09.923742 containerd[2198]: time="2026-01-28T00:26:09.923414269Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:26:09.926989 containerd[2198]: time="2026-01-28T00:26:09.926958549Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:26:09.927060 containerd[2198]: time="2026-01-28T00:26:09.927028543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 00:26:09.927222 kubelet[3705]: E0128 00:26:09.927184 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:26:09.927276 kubelet[3705]: E0128 00:26:09.927228 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:26:09.927354 kubelet[3705]: E0128 00:26:09.927323 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rcvr7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6db66f5c9f-tsvdr_calico-apiserver(d774fe09-bd7c-498b-91a3-e6c2f720c9c3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:26:09.928948 kubelet[3705]: E0128 00:26:09.928919 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-tsvdr" podUID="d774fe09-bd7c-498b-91a3-e6c2f720c9c3" Jan 28 00:26:10.363137 containerd[2198]: time="2026-01-28T00:26:10.363081771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 00:26:10.611203 containerd[2198]: time="2026-01-28T00:26:10.610977771Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:26:10.615098 containerd[2198]: time="2026-01-28T00:26:10.614664887Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 00:26:10.615098 containerd[2198]: time="2026-01-28T00:26:10.614757218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 00:26:10.615776 kubelet[3705]: 
E0128 00:26:10.615587 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:26:10.616180 kubelet[3705]: E0128 00:26:10.615978 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:26:10.616180 kubelet[3705]: E0128 00:26:10.616106 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhxzf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gxssn_calico-system(37d401e3-39ef-4596-8144-de1aba842d50): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 00:26:10.617571 kubelet[3705]: E0128 00:26:10.617545 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxssn" podUID="37d401e3-39ef-4596-8144-de1aba842d50" Jan 28 00:26:11.362507 containerd[2198]: time="2026-01-28T00:26:11.362474409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 00:26:11.615054 containerd[2198]: time="2026-01-28T00:26:11.614955926Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:26:11.618966 containerd[2198]: time="2026-01-28T00:26:11.618927546Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 00:26:11.619051 containerd[2198]: time="2026-01-28T00:26:11.618998052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 00:26:11.619185 kubelet[3705]: E0128 00:26:11.619157 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:26:11.619682 kubelet[3705]: E0128 00:26:11.619414 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:26:11.619682 kubelet[3705]: E0128 00:26:11.619511 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pk8cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w2sfv_calico-system(f6d08a70-95be-4168-8a2f-3e965a6278e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 00:26:11.621462 containerd[2198]: time="2026-01-28T00:26:11.621437038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 00:26:11.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.33:22-10.200.16.10:38080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:26:11.762061 systemd[1]: Started sshd@22-10.200.20.33:22-10.200.16.10:38080.service - OpenSSH per-connection server daemon (10.200.16.10:38080). Jan 28 00:26:11.765597 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 00:26:11.765755 kernel: audit: type=1130 audit(1769559971.761:909): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.33:22-10.200.16.10:38080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:26:11.871687 containerd[2198]: time="2026-01-28T00:26:11.871476205Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:26:11.874803 containerd[2198]: time="2026-01-28T00:26:11.874680882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 00:26:11.874803 containerd[2198]: time="2026-01-28T00:26:11.874679194Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 00:26:11.875017 kubelet[3705]: E0128 00:26:11.874988 3705 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:26:11.875117 kubelet[3705]: E0128 00:26:11.875100 3705 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:26:11.875300 kubelet[3705]: E0128 00:26:11.875262 3705 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pk8cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-w2sfv_calico-system(f6d08a70-95be-4168-8a2f-3e965a6278e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 00:26:11.876559 kubelet[3705]: E0128 00:26:11.876525 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w2sfv" podUID="f6d08a70-95be-4168-8a2f-3e965a6278e2" Jan 28 00:26:12.209416 sshd[6406]: Accepted publickey for core from 10.200.16.10 port 38080 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:26:12.210404 sshd-session[6406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:26:12.208000 audit[6406]: USER_ACCT pid=6406 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:12.208000 audit[6406]: CRED_ACQ pid=6406 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:12.240685 kernel: audit: type=1101 audit(1769559972.208:910): pid=6406 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:12.240761 kernel: audit: type=1103 audit(1769559972.208:911): pid=6406 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:12.250257 kernel: audit: type=1006 audit(1769559972.209:912): pid=6406 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 28 00:26:12.209000 audit[6406]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd203fc80 a2=3 a3=0 items=0 ppid=1 pid=6406 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:26:12.254167 systemd-logind[2162]: New session 26 of user core. 
Jan 28 00:26:12.267410 kernel: audit: type=1300 audit(1769559972.209:912): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd203fc80 a2=3 a3=0 items=0 ppid=1 pid=6406 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:26:12.209000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:26:12.274139 kernel: audit: type=1327 audit(1769559972.209:912): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:26:12.275991 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 28 00:26:12.279000 audit[6406]: USER_START pid=6406 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:12.299000 audit[6410]: CRED_ACQ pid=6410 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:12.313927 kernel: audit: type=1105 audit(1769559972.279:913): pid=6406 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:12.313991 kernel: audit: type=1103 audit(1769559972.299:914): pid=6410 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:12.518835 sshd[6410]: Connection closed by 10.200.16.10 port 38080 Jan 28 00:26:12.521452 sshd-session[6406]: pam_unix(sshd:session): session closed for user core Jan 28 00:26:12.522000 audit[6406]: USER_END pid=6406 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:12.525613 systemd-logind[2162]: Session 26 logged out. Waiting for processes to exit. Jan 28 00:26:12.526965 systemd[1]: sshd@22-10.200.20.33:22-10.200.16.10:38080.service: Deactivated successfully. Jan 28 00:26:12.531073 systemd[1]: session-26.scope: Deactivated successfully. Jan 28 00:26:12.532858 systemd-logind[2162]: Removed session 26. 
Jan 28 00:26:12.522000 audit[6406]: CRED_DISP pid=6406 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:12.546467 kernel: audit: type=1106 audit(1769559972.522:915): pid=6406 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:12.546556 kernel: audit: type=1104 audit(1769559972.522:916): pid=6406 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:12.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.33:22-10.200.16.10:38080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:26:13.361577 kubelet[3705]: E0128 00:26:13.361379 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65cb8f7567-dv7tt" podUID="f8bf6ab5-ad5c-41b7-962e-92c73fabe079" Jan 28 00:26:17.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.33:22-10.200.16.10:38086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:26:17.600948 systemd[1]: Started sshd@23-10.200.20.33:22-10.200.16.10:38086.service - OpenSSH per-connection server daemon (10.200.16.10:38086). Jan 28 00:26:17.604011 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 00:26:17.604070 kernel: audit: type=1130 audit(1769559977.600:918): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.33:22-10.200.16.10:38086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:26:18.007000 audit[6421]: USER_ACCT pid=6421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:18.008662 sshd[6421]: Accepted publickey for core from 10.200.16.10 port 38086 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA Jan 28 00:26:18.024569 sshd-session[6421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:26:18.022000 audit[6421]: CRED_ACQ pid=6421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:18.039336 kernel: audit: type=1101 audit(1769559978.007:919): pid=6421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:18.039379 kernel: audit: type=1103 audit(1769559978.022:920): pid=6421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 00:26:18.043943 systemd-logind[2162]: New session 27 of user core. Jan 28 00:26:18.048925 kernel: audit: type=1006 audit(1769559978.022:921): pid=6421 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 28 00:26:18.022000 audit[6421]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffed005a70 a2=3 a3=0 items=0 ppid=1 pid=6421 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:26:18.066280 kernel: audit: type=1300 audit(1769559978.022:921): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffed005a70 a2=3 a3=0 items=0 ppid=1 pid=6421 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:26:18.022000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:26:18.074275 kernel: audit: type=1327 audit(1769559978.022:921): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:26:18.074979 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 28 00:26:18.078000 audit[6421]: USER_START pid=6421 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:18.098000 audit[6425]: CRED_ACQ pid=6425 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:18.112843 kernel: audit: type=1105 audit(1769559978.078:922): pid=6421 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:18.112912 kernel: audit: type=1103 audit(1769559978.098:923): pid=6425 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:18.290334 sshd[6425]: Connection closed by 10.200.16.10 port 38086
Jan 28 00:26:18.292231 sshd-session[6421]: pam_unix(sshd:session): session closed for user core
Jan 28 00:26:18.293000 audit[6421]: USER_END pid=6421 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:18.296679 systemd-logind[2162]: Session 27 logged out. Waiting for processes to exit.
Jan 28 00:26:18.297256 systemd[1]: sshd@23-10.200.20.33:22-10.200.16.10:38086.service: Deactivated successfully.
Jan 28 00:26:18.301320 systemd[1]: session-27.scope: Deactivated successfully.
Jan 28 00:26:18.306338 systemd-logind[2162]: Removed session 27.
Jan 28 00:26:18.293000 audit[6421]: CRED_DISP pid=6421 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:18.329068 kernel: audit: type=1106 audit(1769559978.293:924): pid=6421 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:18.329133 kernel: audit: type=1104 audit(1769559978.293:925): pid=6421 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:18.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.33:22-10.200.16.10:38086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 00:26:18.364021 kubelet[3705]: E0128 00:26:18.363897 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-758f45684b-zdqfk" podUID="11c0eb0b-ad29-4c1b-b01f-f65a107c6011"
Jan 28 00:26:18.366240 kubelet[3705]: E0128 00:26:18.366180 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65d59647fb-65s4b" podUID="e2c157ae-30f5-408a-abb4-61e3e5e3c10f"
Jan 28 00:26:21.362981 kubelet[3705]: E0128 00:26:21.362939 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-n9l6g" podUID="d62fc2fd-8ccc-48de-b10b-98a0aa5672ea"
Jan 28 00:26:21.363904 kubelet[3705]: E0128 00:26:21.363877 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6db66f5c9f-tsvdr" podUID="d774fe09-bd7c-498b-91a3-e6c2f720c9c3"
Jan 28 00:26:23.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.33:22-10.200.16.10:41976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 00:26:23.375947 systemd[1]: Started sshd@24-10.200.20.33:22-10.200.16.10:41976.service - OpenSSH per-connection server daemon (10.200.16.10:41976).
Jan 28 00:26:23.378799 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 00:26:23.378866 kernel: audit: type=1130 audit(1769559983.375:927): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.33:22-10.200.16.10:41976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 00:26:23.787000 audit[6437]: USER_ACCT pid=6437 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:23.804182 sshd[6437]: Accepted publickey for core from 10.200.16.10 port 41976 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA
Jan 28 00:26:23.804429 sshd-session[6437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:26:23.802000 audit[6437]: CRED_ACQ pid=6437 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:23.819898 kernel: audit: type=1101 audit(1769559983.787:928): pid=6437 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:23.819962 kernel: audit: type=1103 audit(1769559983.802:929): pid=6437 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:23.814203 systemd-logind[2162]: New session 28 of user core.
Jan 28 00:26:23.828165 kernel: audit: type=1006 audit(1769559983.802:930): pid=6437 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Jan 28 00:26:23.802000 audit[6437]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe66ec3b0 a2=3 a3=0 items=0 ppid=1 pid=6437 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 00:26:23.829978 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 28 00:26:23.844611 kernel: audit: type=1300 audit(1769559983.802:930): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe66ec3b0 a2=3 a3=0 items=0 ppid=1 pid=6437 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 00:26:23.802000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 00:26:23.851864 kernel: audit: type=1327 audit(1769559983.802:930): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 00:26:23.851000 audit[6437]: USER_START pid=6437 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:23.870838 kernel: audit: type=1105 audit(1769559983.851:931): pid=6437 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:23.851000 audit[6441]: CRED_ACQ pid=6441 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:23.884648 kernel: audit: type=1103 audit(1769559983.851:932): pid=6441 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:23.898847 update_engine[2166]: I20260128 00:26:23.897884 2166 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jan 28 00:26:23.898847 update_engine[2166]: I20260128 00:26:23.897927 2166 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jan 28 00:26:23.898847 update_engine[2166]: I20260128 00:26:23.898087 2166 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jan 28 00:26:23.899360 update_engine[2166]: I20260128 00:26:23.899338 2166 omaha_request_params.cc:62] Current group set to beta
Jan 28 00:26:23.900248 update_engine[2166]: I20260128 00:26:23.899951 2166 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 28 00:26:23.900248 update_engine[2166]: I20260128 00:26:23.899969 2166 update_attempter.cc:643] Scheduling an action processor start.
Jan 28 00:26:23.900248 update_engine[2166]: I20260128 00:26:23.899982 2166 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 28 00:26:23.903187 update_engine[2166]: I20260128 00:26:23.903162 2166 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jan 28 00:26:23.903313 update_engine[2166]: I20260128 00:26:23.903296 2166 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 28 00:26:23.903386 update_engine[2166]: I20260128 00:26:23.903362 2166 omaha_request_action.cc:272] Request:
Jan 28 00:26:23.903386 update_engine[2166]:
Jan 28 00:26:23.903386 update_engine[2166]:
Jan 28 00:26:23.903386 update_engine[2166]:
Jan 28 00:26:23.903386 update_engine[2166]:
Jan 28 00:26:23.903386 update_engine[2166]:
Jan 28 00:26:23.903386 update_engine[2166]:
Jan 28 00:26:23.903386 update_engine[2166]:
Jan 28 00:26:23.903386 update_engine[2166]:
Jan 28 00:26:23.904220 update_engine[2166]: I20260128 00:26:23.903558 2166 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 28 00:26:23.904263 locksmithd[2240]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jan 28 00:26:23.905158 update_engine[2166]: I20260128 00:26:23.905136 2166 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 28 00:26:23.905796 update_engine[2166]: I20260128 00:26:23.905771 2166 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 28 00:26:23.918855 update_engine[2166]: E20260128 00:26:23.918807 2166 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Jan 28 00:26:23.918962 update_engine[2166]: I20260128 00:26:23.918946 2166 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jan 28 00:26:24.063899 sshd[6441]: Connection closed by 10.200.16.10 port 41976
Jan 28 00:26:24.064462 sshd-session[6437]: pam_unix(sshd:session): session closed for user core
Jan 28 00:26:24.066000 audit[6437]: USER_END pid=6437 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:24.070376 systemd-logind[2162]: Session 28 logged out. Waiting for processes to exit.
Jan 28 00:26:24.070922 systemd[1]: sshd@24-10.200.20.33:22-10.200.16.10:41976.service: Deactivated successfully.
Jan 28 00:26:24.078224 systemd[1]: session-28.scope: Deactivated successfully.
Jan 28 00:26:24.082031 systemd-logind[2162]: Removed session 28.
Jan 28 00:26:24.066000 audit[6437]: CRED_DISP pid=6437 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:24.110369 kernel: audit: type=1106 audit(1769559984.066:933): pid=6437 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:24.110429 kernel: audit: type=1104 audit(1769559984.066:934): pid=6437 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:24.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.33:22-10.200.16.10:41976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 00:26:24.362784 kubelet[3705]: E0128 00:26:24.362735 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gxssn" podUID="37d401e3-39ef-4596-8144-de1aba842d50"
Jan 28 00:26:26.365708 kubelet[3705]: E0128 00:26:26.365664 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65cb8f7567-dv7tt" podUID="f8bf6ab5-ad5c-41b7-962e-92c73fabe079"
Jan 28 00:26:26.368866 kubelet[3705]: E0128 00:26:26.368829 3705 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w2sfv" podUID="f6d08a70-95be-4168-8a2f-3e965a6278e2"
Jan 28 00:26:29.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.33:22-10.200.16.10:41980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 00:26:29.130669 systemd[1]: Started sshd@25-10.200.20.33:22-10.200.16.10:41980.service - OpenSSH per-connection server daemon (10.200.16.10:41980).
Jan 28 00:26:29.134589 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 00:26:29.134651 kernel: audit: type=1130 audit(1769559989.130:936): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.33:22-10.200.16.10:41980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 00:26:29.507124 sshd[6477]: Accepted publickey for core from 10.200.16.10 port 41980 ssh2: RSA SHA256:KFxaFr+tegfi8NDQBXYnJ8/JCxzG9ZFiPPeeI8sIubA
Jan 28 00:26:29.506000 audit[6477]: USER_ACCT pid=6477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:29.524337 sshd-session[6477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:26:29.522000 audit[6477]: CRED_ACQ pid=6477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:29.531661 systemd-logind[2162]: New session 29 of user core.
Jan 28 00:26:29.544844 kernel: audit: type=1101 audit(1769559989.506:937): pid=6477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:29.544922 kernel: audit: type=1103 audit(1769559989.522:938): pid=6477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:29.553330 kernel: audit: type=1006 audit(1769559989.522:939): pid=6477 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1
Jan 28 00:26:29.522000 audit[6477]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffce44efc0 a2=3 a3=0 items=0 ppid=1 pid=6477 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 00:26:29.569884 kernel: audit: type=1300 audit(1769559989.522:939): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffce44efc0 a2=3 a3=0 items=0 ppid=1 pid=6477 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 00:26:29.522000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 00:26:29.572526 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 28 00:26:29.577246 kernel: audit: type=1327 audit(1769559989.522:939): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 00:26:29.578000 audit[6477]: USER_START pid=6477 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:29.599000 audit[6481]: CRED_ACQ pid=6481 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:29.613656 kernel: audit: type=1105 audit(1769559989.578:940): pid=6477 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:29.613724 kernel: audit: type=1103 audit(1769559989.599:941): pid=6481 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:29.790910 sshd[6481]: Connection closed by 10.200.16.10 port 41980
Jan 28 00:26:29.791718 sshd-session[6477]: pam_unix(sshd:session): session closed for user core
Jan 28 00:26:29.792000 audit[6477]: USER_END pid=6477 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:29.796398 systemd-logind[2162]: Session 29 logged out. Waiting for processes to exit.
Jan 28 00:26:29.798404 systemd[1]: sshd@25-10.200.20.33:22-10.200.16.10:41980.service: Deactivated successfully.
Jan 28 00:26:29.803261 systemd[1]: session-29.scope: Deactivated successfully.
Jan 28 00:26:29.805344 systemd-logind[2162]: Removed session 29.
Jan 28 00:26:29.792000 audit[6477]: CRED_DISP pid=6477 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:29.827766 kernel: audit: type=1106 audit(1769559989.792:942): pid=6477 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:29.827836 kernel: audit: type=1104 audit(1769559989.792:943): pid=6477 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 00:26:29.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.33:22-10.200.16.10:41980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'