Dec 16 12:14:45.431125 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Dec 16 12:14:45.431142 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Dec 16 00:05:24 -00 2025
Dec 16 12:14:45.431148 kernel: KASLR enabled
Dec 16 12:14:45.431152 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Dec 16 12:14:45.431157 kernel: printk: legacy bootconsole [pl11] enabled
Dec 16 12:14:45.431161 kernel: efi: EFI v2.7 by EDK II
Dec 16 12:14:45.431167 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db7d598
Dec 16 12:14:45.431171 kernel: random: crng init done
Dec 16 12:14:45.431175 kernel: secureboot: Secure boot disabled
Dec 16 12:14:45.431179 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:14:45.431183 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Dec 16 12:14:45.431187 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:14:45.431191 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:14:45.431197 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Dec 16 12:14:45.431202 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:14:45.431206 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:14:45.431211 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:14:45.431216 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:14:45.431221 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:14:45.431225 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:14:45.431230 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Dec 16 12:14:45.431234 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:14:45.431238 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Dec 16 12:14:45.431243 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 16 12:14:45.431247 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Dec 16 12:14:45.431252 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Dec 16 12:14:45.431256 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Dec 16 12:14:45.431261 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Dec 16 12:14:45.431266 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Dec 16 12:14:45.431270 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Dec 16 12:14:45.431274 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Dec 16 12:14:45.431279 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Dec 16 12:14:45.431283 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Dec 16 12:14:45.431288 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Dec 16 12:14:45.431292 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Dec 16 12:14:45.431297 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Dec 16 12:14:45.431301 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Dec 16 12:14:45.431305 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Dec 16 12:14:45.431311 kernel: Zone ranges:
Dec 16 12:14:45.431315 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Dec 16 12:14:45.431322 kernel: DMA32 empty
Dec 16 12:14:45.431326 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Dec 16 12:14:45.431331 kernel: Device empty
Dec 16 12:14:45.431336 kernel: Movable zone start for each node
Dec 16 12:14:45.431341 kernel: Early memory node ranges
Dec 16 12:14:45.431346 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Dec 16 12:14:45.431350 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Dec 16 12:14:45.431355 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Dec 16 12:14:45.431360 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Dec 16 12:14:45.431364 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Dec 16 12:14:45.431369 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Dec 16 12:14:45.431374 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Dec 16 12:14:45.431379 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Dec 16 12:14:45.431384 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Dec 16 12:14:45.431389 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Dec 16 12:14:45.431393 kernel: psci: probing for conduit method from ACPI.
Dec 16 12:14:45.431398 kernel: psci: PSCIv1.3 detected in firmware.
Dec 16 12:14:45.431402 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 16 12:14:45.431407 kernel: psci: MIGRATE_INFO_TYPE not supported.
Dec 16 12:14:45.431412 kernel: psci: SMC Calling Convention v1.4 Dec 16 12:14:45.431416 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Dec 16 12:14:45.431421 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Dec 16 12:14:45.431426 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 16 12:14:45.431430 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 16 12:14:45.431436 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 16 12:14:45.431441 kernel: Detected PIPT I-cache on CPU0 Dec 16 12:14:45.431462 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Dec 16 12:14:45.431467 kernel: CPU features: detected: GIC system register CPU interface Dec 16 12:14:45.431472 kernel: CPU features: detected: Spectre-v4 Dec 16 12:14:45.431477 kernel: CPU features: detected: Spectre-BHB Dec 16 12:14:45.431481 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 16 12:14:45.431486 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 16 12:14:45.431491 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Dec 16 12:14:45.431495 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 16 12:14:45.431501 kernel: alternatives: applying boot alternatives Dec 16 12:14:45.431507 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=756b815c2fd7ac2947efceb2a88878d1ea9723ec85037c2b4d1a09bd798bb749 Dec 16 12:14:45.431512 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 12:14:45.431517 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 12:14:45.431521 kernel: Fallback order for Node 0: 0 Dec 16 12:14:45.431526 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Dec 16 12:14:45.431531 kernel: Policy zone: Normal Dec 16 12:14:45.431535 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 12:14:45.431540 kernel: software IO TLB: area num 2. Dec 16 12:14:45.431545 kernel: software IO TLB: mapped [mem 0x0000000037370000-0x000000003b370000] (64MB) Dec 16 12:14:45.431549 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 16 12:14:45.431555 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 12:14:45.431560 kernel: rcu: RCU event tracing is enabled. Dec 16 12:14:45.431565 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 16 12:14:45.431570 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 12:14:45.431575 kernel: Tracing variant of Tasks RCU enabled. Dec 16 12:14:45.431579 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 12:14:45.431584 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 16 12:14:45.431589 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 12:14:45.431594 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Dec 16 12:14:45.431598 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 16 12:14:45.431603 kernel: GICv3: 960 SPIs implemented Dec 16 12:14:45.431608 kernel: GICv3: 0 Extended SPIs implemented Dec 16 12:14:45.431613 kernel: Root IRQ handler: gic_handle_irq Dec 16 12:14:45.431618 kernel: GICv3: GICv3 features: 16 PPIs, RSS Dec 16 12:14:45.431622 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Dec 16 12:14:45.431627 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Dec 16 12:14:45.431632 kernel: ITS: No ITS available, not enabling LPIs Dec 16 12:14:45.431637 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 12:14:45.431641 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Dec 16 12:14:45.431646 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 12:14:45.431651 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Dec 16 12:14:45.431655 kernel: Console: colour dummy device 80x25 Dec 16 12:14:45.431661 kernel: printk: legacy console [tty1] enabled Dec 16 12:14:45.431666 kernel: ACPI: Core revision 20240827 Dec 16 12:14:45.431671 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Dec 16 12:14:45.431676 kernel: pid_max: default: 32768 minimum: 301 Dec 16 12:14:45.431681 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 12:14:45.431686 kernel: landlock: Up and running. Dec 16 12:14:45.431691 kernel: SELinux: Initializing. Dec 16 12:14:45.431697 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:14:45.431702 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:14:45.431707 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1 Dec 16 12:14:45.431712 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0 Dec 16 12:14:45.431719 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 16 12:14:45.431725 kernel: rcu: Hierarchical SRCU implementation. Dec 16 12:14:45.431730 kernel: rcu: Max phase no-delay instances is 400. Dec 16 12:14:45.431736 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 12:14:45.431741 kernel: Remapping and enabling EFI services. Dec 16 12:14:45.431746 kernel: smp: Bringing up secondary CPUs ... Dec 16 12:14:45.431752 kernel: Detected PIPT I-cache on CPU1 Dec 16 12:14:45.431757 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Dec 16 12:14:45.431762 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Dec 16 12:14:45.431768 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 12:14:45.431773 kernel: SMP: Total of 2 processors activated. 
Dec 16 12:14:45.431778 kernel: CPU: All CPU(s) started at EL1 Dec 16 12:14:45.431784 kernel: CPU features: detected: 32-bit EL0 Support Dec 16 12:14:45.431789 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Dec 16 12:14:45.431794 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 16 12:14:45.431799 kernel: CPU features: detected: Common not Private translations Dec 16 12:14:45.431805 kernel: CPU features: detected: CRC32 instructions Dec 16 12:14:45.431811 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Dec 16 12:14:45.431816 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 16 12:14:45.431821 kernel: CPU features: detected: LSE atomic instructions Dec 16 12:14:45.431826 kernel: CPU features: detected: Privileged Access Never Dec 16 12:14:45.431831 kernel: CPU features: detected: Speculation barrier (SB) Dec 16 12:14:45.431836 kernel: CPU features: detected: TLB range maintenance instructions Dec 16 12:14:45.431842 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 16 12:14:45.431847 kernel: CPU features: detected: Scalable Vector Extension Dec 16 12:14:45.431853 kernel: alternatives: applying system-wide alternatives Dec 16 12:14:45.431858 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Dec 16 12:14:45.431863 kernel: SVE: maximum available vector length 16 bytes per vector Dec 16 12:14:45.431868 kernel: SVE: default vector length 16 bytes per vector Dec 16 12:14:45.431874 kernel: Memory: 3979900K/4194160K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12480K init, 1038K bss, 193072K reserved, 16384K cma-reserved) Dec 16 12:14:45.431880 kernel: devtmpfs: initialized Dec 16 12:14:45.431885 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 12:14:45.431890 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 16 12:14:45.431896 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 16 12:14:45.431901 kernel: 0 pages in range for non-PLT usage Dec 16 12:14:45.431906 kernel: 515168 pages in range for PLT usage Dec 16 12:14:45.431911 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 12:14:45.431916 kernel: SMBIOS 3.1.0 present. Dec 16 12:14:45.431922 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Dec 16 12:14:45.431927 kernel: DMI: Memory slots populated: 2/2 Dec 16 12:14:45.431932 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 12:14:45.431937 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 16 12:14:45.431943 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 16 12:14:45.431948 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 16 12:14:45.431953 kernel: audit: initializing netlink subsys (disabled) Dec 16 12:14:45.431959 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Dec 16 12:14:45.431964 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 12:14:45.431969 kernel: cpuidle: using governor menu Dec 16 12:14:45.431974 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 16 12:14:45.431980 kernel: ASID allocator initialised with 32768 entries
Dec 16 12:14:45.431985 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:14:45.431990 kernel: Serial: AMBA PL011 UART driver
Dec 16 12:14:45.431996 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 12:14:45.432001 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 12:14:45.432006 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 16 12:14:45.432011 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 16 12:14:45.432016 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:14:45.432021 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:14:45.432026 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 16 12:14:45.432032 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 16 12:14:45.432037 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:14:45.432042 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:14:45.432047 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:14:45.432053 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 12:14:45.432058 kernel: ACPI: Interpreter enabled
Dec 16 12:14:45.432063 kernel: ACPI: Using GIC for interrupt routing
Dec 16 12:14:45.432069 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Dec 16 12:14:45.432074 kernel: printk: legacy console [ttyAMA0] enabled
Dec 16 12:14:45.432079 kernel: printk: legacy bootconsole [pl11] disabled
Dec 16 12:14:45.432084 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Dec 16 12:14:45.432089 kernel: ACPI: CPU0 has been hot-added
Dec 16 12:14:45.432094 kernel: ACPI: CPU1 has been hot-added
Dec 16 12:14:45.432099 kernel: iommu: Default domain type: Translated
Dec 16 12:14:45.432105 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 16 12:14:45.432111 kernel: efivars: Registered efivars operations
Dec 16 12:14:45.432116 kernel: vgaarb: loaded
Dec 16 12:14:45.432121 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 16 12:14:45.432126 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 12:14:45.432131 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 12:14:45.432136 kernel: pnp: PnP ACPI init
Dec 16 12:14:45.432141 kernel: pnp: PnP ACPI: found 0 devices
Dec 16 12:14:45.432147 kernel: NET: Registered PF_INET protocol family
Dec 16 12:14:45.432152 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 12:14:45.432157 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 12:14:45.432163 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 12:14:45.432168 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:14:45.432173 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 12:14:45.432178 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 12:14:45.432184 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:14:45.432189 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:14:45.432195 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 12:14:45.432200 kernel: PCI: CLS 0 bytes, default 64
Dec 16 12:14:45.432205 kernel: kvm [1]: HYP mode not available
Dec 16 12:14:45.432210 kernel: Initialise system trusted keyrings
Dec 16 12:14:45.432215 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 12:14:45.432221 kernel: Key type asymmetric registered
Dec 16 12:14:45.432226 kernel: Asymmetric key parser 'x509' registered
Dec 16 12:14:45.432231 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 16 12:14:45.432237 kernel: io scheduler mq-deadline registered
Dec 16 12:14:45.432242 kernel: io scheduler kyber registered
Dec 16 12:14:45.432247 kernel: io scheduler bfq registered
Dec 16 12:14:45.432252 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 12:14:45.432258 kernel: thunder_xcv, ver 1.0
Dec 16 12:14:45.432263 kernel: thunder_bgx, ver 1.0
Dec 16 12:14:45.432268 kernel: nicpf, ver 1.0
Dec 16 12:14:45.432273 kernel: nicvf, ver 1.0
Dec 16 12:14:45.432393 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 16 12:14:45.432511 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:14:42 UTC (1765887282)
Dec 16 12:14:45.432522 kernel: efifb: probing for efifb
Dec 16 12:14:45.432528 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 16 12:14:45.432533 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 16 12:14:45.432538 kernel: efifb: scrolling: redraw
Dec 16 12:14:45.432543 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 12:14:45.432548 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 12:14:45.432554 kernel: fb0: EFI VGA frame buffer device
Dec 16 12:14:45.432560 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Dec 16 12:14:45.432565 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 12:14:45.432570 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 16 12:14:45.432576 kernel: watchdog: NMI not fully supported
Dec 16 12:14:45.432581 kernel: watchdog: Hard watchdog permanently disabled
Dec 16 12:14:45.432586 kernel: NET: Registered PF_INET6 protocol family
Dec 16 12:14:45.432591 kernel: Segment Routing with IPv6
Dec 16 12:14:45.432597 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 12:14:45.432603 kernel: NET: Registered PF_PACKET protocol family
Dec 16 12:14:45.432608 kernel: Key type dns_resolver registered
Dec 16 12:14:45.432613 kernel: registered taskstats version 1
Dec 16 12:14:45.432618 kernel: Loading compiled-in X.509 certificates
Dec 16 12:14:45.432624 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 545838337a91b65b763486e536766b3eec3ef99d'
Dec 16 12:14:45.432629 kernel: Demotion targets for Node 0: null
Dec 16 12:14:45.432634 kernel: Key type .fscrypt registered
Dec 16 12:14:45.432640 kernel: Key type fscrypt-provisioning registered
Dec 16 12:14:45.432645 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 12:14:45.432650 kernel: ima: Allocated hash algorithm: sha1 Dec 16 12:14:45.432656 kernel: ima: No architecture policies found Dec 16 12:14:45.432661 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 16 12:14:45.432666 kernel: clk: Disabling unused clocks Dec 16 12:14:45.432671 kernel: PM: genpd: Disabling unused power domains Dec 16 12:14:45.432677 kernel: Freeing unused kernel memory: 12480K Dec 16 12:14:45.432682 kernel: Run /init as init process Dec 16 12:14:45.432687 kernel: with arguments: Dec 16 12:14:45.432692 kernel: /init Dec 16 12:14:45.432697 kernel: with environment: Dec 16 12:14:45.432702 kernel: HOME=/ Dec 16 12:14:45.432708 kernel: TERM=linux Dec 16 12:14:45.432713 kernel: hv_vmbus: Vmbus version:5.3 Dec 16 12:14:45.432719 kernel: SCSI subsystem initialized Dec 16 12:14:45.432724 kernel: hv_vmbus: registering driver hid_hyperv Dec 16 12:14:45.432729 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 16 12:14:45.432816 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 16 12:14:45.432824 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 16 12:14:45.432830 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 16 12:14:45.432836 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 16 12:14:45.432841 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 16 12:14:45.432846 kernel: PTP clock support registered Dec 16 12:14:45.432852 kernel: hv_utils: Registering HyperV Utility Driver Dec 16 12:14:45.432857 kernel: hv_vmbus: registering driver hv_utils Dec 16 12:14:45.432862 kernel: hv_utils: Heartbeat IC version 3.0 Dec 16 12:14:45.432869 kernel: hv_utils: Shutdown IC version 3.2 Dec 16 12:14:45.432874 kernel: hv_utils: TimeSync IC version 4.0 Dec 16 12:14:45.432879 kernel: hv_vmbus: registering driver hv_storvsc Dec 16 12:14:45.432970 kernel: scsi host0: storvsc_host_t Dec 16 12:14:45.433048 kernel: scsi host1: storvsc_host_t Dec 16 12:14:45.433136 kernel: scsi 1:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 16 12:14:45.433220 kernel: scsi 1:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Dec 16 12:14:45.433292 kernel: sd 1:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 16 12:14:45.433365 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Dec 16 12:14:45.433437 kernel: sd 1:0:0:0: [sda] Write Protect is off Dec 16 12:14:45.435580 kernel: sd 1:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 16 12:14:45.435672 kernel: sd 1:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 16 12:14:45.435762 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#309 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Dec 16 12:14:45.435832 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#256 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Dec 16 12:14:45.435840 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 12:14:45.435913 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Dec 16 12:14:45.435988 kernel: sr 1:0:0:2: [sr0] scsi-1 drive Dec 16 12:14:45.435997 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 12:14:45.436068 kernel: sr 1:0:0:2: Attached scsi CD-ROM sr0 Dec 16 12:14:45.436075 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 16 12:14:45.436080 kernel: device-mapper: uevent: version 1.0.3 Dec 16 12:14:45.436086 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 12:14:45.436091 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 16 12:14:45.436096 kernel: raid6: neonx8 gen() 18531 MB/s Dec 16 12:14:45.436103 kernel: raid6: neonx4 gen() 18581 MB/s Dec 16 12:14:45.436108 kernel: raid6: neonx2 gen() 17113 MB/s Dec 16 12:14:45.436113 kernel: raid6: neonx1 gen() 15101 MB/s Dec 16 12:14:45.436118 kernel: raid6: int64x8 gen() 10561 MB/s Dec 16 12:14:45.436123 kernel: raid6: int64x4 gen() 10618 MB/s Dec 16 12:14:45.436129 kernel: raid6: int64x2 gen() 8994 MB/s Dec 16 12:14:45.436134 kernel: raid6: int64x1 gen() 7059 MB/s Dec 16 12:14:45.436139 kernel: raid6: using algorithm neonx4 gen() 18581 MB/s Dec 16 12:14:45.436145 kernel: raid6: .... xor() 15138 MB/s, rmw enabled Dec 16 12:14:45.436150 kernel: raid6: using neon recovery algorithm Dec 16 12:14:45.436156 kernel: xor: measuring software checksum speed Dec 16 12:14:45.436161 kernel: 8regs : 28649 MB/sec Dec 16 12:14:45.436166 kernel: 32regs : 28763 MB/sec Dec 16 12:14:45.436171 kernel: arm64_neon : 37252 MB/sec Dec 16 12:14:45.436176 kernel: xor: using function: arm64_neon (37252 MB/sec) Dec 16 12:14:45.436182 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 12:14:45.436188 kernel: BTRFS: device fsid d00a2bc5-1c68-4957-aa37-d070193fcf05 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (403) Dec 16 12:14:45.436193 kernel: BTRFS info (device dm-0): first mount of filesystem d00a2bc5-1c68-4957-aa37-d070193fcf05 Dec 16 12:14:45.436198 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:14:45.436204 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 12:14:45.436209 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 12:14:45.436214 kernel: loop: module loaded Dec 16 12:14:45.436220 kernel: loop0: detected capacity change from 0 to 91832 Dec 16 12:14:45.436225 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 12:14:45.436231 systemd[1]: Successfully made /usr/ read-only. Dec 16 12:14:45.436239 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:14:45.436245 systemd[1]: Detected virtualization microsoft. Dec 16 12:14:45.436250 systemd[1]: Detected architecture arm64. Dec 16 12:14:45.436256 systemd[1]: Running in initrd. Dec 16 12:14:45.436262 systemd[1]: No hostname configured, using default hostname. Dec 16 12:14:45.436268 systemd[1]: Hostname set to . Dec 16 12:14:45.436273 systemd[1]: Initializing machine ID from random generator. Dec 16 12:14:45.436279 systemd[1]: Queued start job for default target initrd.target. Dec 16 12:14:45.436284 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:14:45.436290 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:14:45.436296 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 16 12:14:45.436302 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 12:14:45.436308 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:14:45.436314 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 12:14:45.436320 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 12:14:45.436326 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:14:45.436332 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:14:45.436338 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:14:45.436344 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:14:45.436349 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:14:45.436355 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:14:45.436360 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:14:45.436367 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:14:45.436372 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:14:45.436378 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:14:45.436383 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 12:14:45.436389 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 12:14:45.436395 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:14:45.436405 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:14:45.436412 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:14:45.436418 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:14:45.436424 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 12:14:45.436437 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 12:14:45.436445 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:14:45.436466 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 12:14:45.436472 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 12:14:45.436478 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 12:14:45.436484 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:14:45.436490 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:14:45.436511 systemd-journald[541]: Collecting audit messages is enabled. Dec 16 12:14:45.436526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:14:45.436533 systemd-journald[541]: Journal started Dec 16 12:14:45.436546 systemd-journald[541]: Runtime Journal (/run/log/journal/36a1411367444c57af23be94eec20fe5) is 8M, max 78.3M, 70.3M free. Dec 16 12:14:45.451594 systemd[1]: Started systemd-journald.service - Journal Service. 
Dec 16 12:14:45.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.463486 kernel: audit: type=1130 audit(1765887285.450:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.464560 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 12:14:45.484511 kernel: audit: type=1130 audit(1765887285.467:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.469302 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:14:45.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.490838 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 12:14:45.536338 kernel: audit: type=1130 audit(1765887285.489:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.536357 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 12:14:45.536370 kernel: audit: type=1130 audit(1765887285.509:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.512026 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:14:45.545061 kernel: Bridge firewalling registered Dec 16 12:14:45.545150 systemd-modules-load[544]: Inserted module 'br_netfilter' Dec 16 12:14:45.549912 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:14:45.558920 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:14:45.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.566746 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:14:45.590679 kernel: audit: type=1130 audit(1765887285.564:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.577977 systemd-tmpfiles[554]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Dec 16 12:14:45.590364 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:14:45.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.609205 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:14:45.641567 kernel: audit: type=1130 audit(1765887285.601:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.641591 kernel: audit: type=1130 audit(1765887285.629:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.642796 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:14:45.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.666706 kernel: audit: type=1130 audit(1765887285.650:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.662537 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:14:45.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.672413 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 12:14:45.692805 kernel: audit: type=1130 audit(1765887285.669:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.696000 audit: BPF prog-id=6 op=LOAD Dec 16 12:14:45.698283 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:14:45.707414 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:14:45.723817 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:14:45.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.738645 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:14:45.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.744526 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Dec 16 12:14:45.824303 systemd-resolved[566]: Positive Trust Anchors: Dec 16 12:14:45.827292 systemd-resolved[566]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:14:45.841893 dracut-cmdline[581]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=756b815c2fd7ac2947efceb2a88878d1ea9723ec85037c2b4d1a09bd798bb749 Dec 16 12:14:45.827298 systemd-resolved[566]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:14:45.827318 systemd-resolved[566]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:14:45.873755 systemd-resolved[566]: Defaulting to hostname 'linux'. Dec 16 12:14:45.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:45.874374 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:14:45.903143 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:14:45.991465 kernel: Loading iSCSI transport class v2.0-870. Dec 16 12:14:46.160481 kernel: iscsi: registered transport (tcp) Dec 16 12:14:46.200767 kernel: iscsi: registered transport (qla4xxx) Dec 16 12:14:46.200785 kernel: QLogic iSCSI HBA Driver Dec 16 12:14:46.245142 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:14:46.263924 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:14:46.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:46.270157 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:14:46.310877 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 12:14:46.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:46.316409 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 12:14:46.336950 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 12:14:46.352485 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Dec 16 12:14:46.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:46.360000 audit: BPF prog-id=7 op=LOAD Dec 16 12:14:46.360000 audit: BPF prog-id=8 op=LOAD Dec 16 12:14:46.362883 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:14:46.446401 systemd-udevd[794]: Using default interface naming scheme 'v257'. Dec 16 12:14:46.450564 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:14:46.479669 kernel: kauditd_printk_skb: 9 callbacks suppressed Dec 16 12:14:46.479688 kernel: audit: type=1130 audit(1765887286.454:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:46.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:46.455861 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:14:46.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:46.496469 kernel: audit: type=1130 audit(1765887286.478:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:46.496415 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 12:14:46.506000 audit: BPF prog-id=9 op=LOAD Dec 16 12:14:46.511465 kernel: audit: type=1334 audit(1765887286.506:22): prog-id=9 op=LOAD Dec 16 12:14:46.516630 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:14:46.529398 dracut-pre-trigger[919]: rd.md=0: removing MD RAID activation Dec 16 12:14:46.548928 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:14:46.577545 kernel: audit: type=1130 audit(1765887286.553:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:46.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:46.565804 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:14:46.577416 systemd-networkd[920]: lo: Link UP Dec 16 12:14:46.577419 systemd-networkd[920]: lo: Gained carrier Dec 16 12:14:46.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:46.582464 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:14:46.590762 systemd[1]: Reached target network.target - Network. 
Dec 16 12:14:46.615459 kernel: audit: type=1130 audit(1765887286.589:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:46.635999 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:14:46.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:46.661470 kernel: audit: type=1130 audit(1765887286.641:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:46.670167 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 12:14:46.718968 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:14:46.719811 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:14:46.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:46.733841 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:14:46.760344 kernel: audit: type=1131 audit(1765887286.732:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:46.760362 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#292 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 12:14:46.765477 kernel: hv_vmbus: registering driver hv_netvsc Dec 16 12:14:46.765964 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:14:46.781854 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:14:46.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:46.781918 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:14:46.824113 kernel: audit: type=1130 audit(1765887286.790:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:46.824132 kernel: audit: type=1131 audit(1765887286.790:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:46.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:46.803675 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:14:46.837475 kernel: hv_netvsc 002248be-ab79-0022-48be-ab79002248be eth0: VF slot 1 added Dec 16 12:14:46.839352 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 16 12:14:46.865985 kernel: audit: type=1130 audit(1765887286.844:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:46.866009 kernel: hv_vmbus: registering driver hv_pci Dec 16 12:14:46.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:46.849129 systemd-networkd[920]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:14:46.879365 kernel: hv_pci 13ec1b3f-5741-46b9-b74e-b0edc7a9434f: PCI VMBus probing: Using version 0x10004 Dec 16 12:14:46.849132 systemd-networkd[920]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:14:46.861769 systemd-networkd[920]: eth0: Link UP Dec 16 12:14:46.902359 kernel: hv_pci 13ec1b3f-5741-46b9-b74e-b0edc7a9434f: PCI host bridge to bus 5741:00 Dec 16 12:14:46.902513 kernel: pci_bus 5741:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 16 12:14:46.902638 kernel: pci_bus 5741:00: No busn resource found for root bus, will use [bus 00-ff] Dec 16 12:14:46.861886 systemd-networkd[920]: eth0: Gained carrier Dec 16 12:14:46.913498 kernel: pci 5741:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Dec 16 12:14:46.861895 systemd-networkd[920]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:14:46.927843 kernel: pci 5741:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 16 12:14:46.922902 systemd-networkd[920]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:14:46.937549 kernel: pci 5741:00:02.0: enabling Extended Tags Dec 16 12:14:46.950493 kernel: pci 5741:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5741:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Dec 16 12:14:46.960188 kernel: pci_bus 5741:00: busn_res: [bus 00-ff] end is updated to 00 Dec 16 12:14:46.960335 kernel: pci 5741:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Dec 16 12:14:47.181219 kernel: mlx5_core 5741:00:02.0: enabling device (0000 -> 0002) Dec 16 12:14:47.188057 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 16 12:14:47.198651 kernel: mlx5_core 5741:00:02.0: PTM is not supported by PCIe Dec 16 12:14:47.198800 kernel: mlx5_core 5741:00:02.0: firmware version: 16.30.5006 Dec 16 12:14:47.199704 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 12:14:47.365497 kernel: hv_netvsc 002248be-ab79-0022-48be-ab79002248be eth0: VF registering: eth1 Dec 16 12:14:47.365680 kernel: mlx5_core 5741:00:02.0 eth1: joined to eth0 Dec 16 12:14:47.372493 kernel: mlx5_core 5741:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Dec 16 12:14:47.380840 systemd-networkd[920]: eth1: Interface name change detected, renamed to enP22337s1. Dec 16 12:14:47.385718 kernel: mlx5_core 5741:00:02.0 enP22337s1: renamed from eth1 Dec 16 12:14:47.405179 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 16 12:14:47.420394 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Dec 16 12:14:47.441310 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Dec 16 12:14:47.511465 kernel: mlx5_core 5741:00:02.0 enP22337s1: Link up
Dec 16 12:14:47.549618 kernel: hv_netvsc 002248be-ab79-0022-48be-ab79002248be eth0: Data path switched to VF: enP22337s1
Dec 16 12:14:47.549323 systemd-networkd[920]: enP22337s1: Link UP
Dec 16 12:14:47.567485 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:14:47.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:14:47.572149 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:14:47.581286 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:14:47.590887 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:14:47.603582 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 12:14:47.625728 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:14:47.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:14:47.863648 systemd-networkd[920]: enP22337s1: Gained carrier
Dec 16 12:14:47.904156 systemd-networkd[920]: eth0: Gained IPv6LL
Dec 16 12:14:48.476901 disk-uuid[1026]: Warning: The kernel is still using the old partition table.
Dec 16 12:14:48.476901 disk-uuid[1026]: The new table will be used at the next reboot or after you
Dec 16 12:14:48.476901 disk-uuid[1026]: run partprobe(8) or kpartx(8)
Dec 16 12:14:48.476901 disk-uuid[1026]: The operation has completed successfully.
Dec 16 12:14:48.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:14:48.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:14:48.482636 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 12:14:48.482770 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 12:14:48.492151 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 12:14:48.563461 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1192)
Dec 16 12:14:48.573153 kernel: BTRFS info (device sda6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47
Dec 16 12:14:48.573184 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:14:48.610217 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 12:14:48.610242 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 12:14:48.619484 kernel: BTRFS info (device sda6): last unmount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47
Dec 16 12:14:48.620346 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 12:14:48.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:14:48.625356 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 12:14:49.544147 ignition[1211]: Ignition 2.24.0
Dec 16 12:14:49.544159 ignition[1211]: Stage: fetch-offline
Dec 16 12:14:49.548033 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:14:49.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:14:49.544364 ignition[1211]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:14:49.555902 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 12:14:49.544378 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 12:14:49.544471 ignition[1211]: parsed url from cmdline: ""
Dec 16 12:14:49.544474 ignition[1211]: no config URL provided
Dec 16 12:14:49.544478 ignition[1211]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:14:49.544485 ignition[1211]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:14:49.544489 ignition[1211]: failed to fetch config: resource requires networking
Dec 16 12:14:49.544696 ignition[1211]: Ignition finished successfully
Dec 16 12:14:49.587418 ignition[1218]: Ignition 2.24.0
Dec 16 12:14:49.587422 ignition[1218]: Stage: fetch
Dec 16 12:14:49.587611 ignition[1218]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:14:49.587619 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 16 12:14:49.587697 ignition[1218]: parsed url from cmdline: ""
Dec 16 12:14:49.587700 ignition[1218]: no config URL provided
Dec 16 12:14:49.587703 ignition[1218]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:14:49.587708 ignition[1218]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:14:49.587722 ignition[1218]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Dec 16 12:14:49.649205 ignition[1218]: GET result: OK
Dec 16 12:14:49.651543 ignition[1218]: config has been read from IMDS userdata
Dec 16 12:14:49.651557 ignition[1218]: parsing config with SHA512: 1fda74f50ac9a9838ae3b4eb9b880de761dfe8cc245e08a34ddc6e6bd3ca67d0d885633faca1ecf01afce694a7fe35ba18981f2cf1c5fb98ba691ef3fc10a833
Dec 16 12:14:49.656421 unknown[1218]: fetched base config from "system"
Dec 16 12:14:49.656434 unknown[1218]: fetched base config from "system"
Dec 16 12:14:49.656744 ignition[1218]: fetch: fetch complete
Dec 16 12:14:49.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:14:49.656438 unknown[1218]: fetched user config from "azure"
Dec 16 12:14:49.656748 ignition[1218]: fetch: fetch passed
Dec 16 12:14:49.661132 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 12:14:49.656793 ignition[1218]: Ignition finished successfully
Dec 16 12:14:49.668682 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 12:14:49.699384 ignition[1224]: Ignition 2.24.0
Dec 16 12:14:49.699395 ignition[1224]: Stage: kargs
Dec 16 12:14:49.701929 ignition[1224]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:14:49.705616 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 12:14:49.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:49.701936 ignition[1224]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:14:49.715569 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 12:14:49.702469 ignition[1224]: kargs: kargs passed Dec 16 12:14:49.702503 ignition[1224]: Ignition finished successfully Dec 16 12:14:49.735789 ignition[1230]: Ignition 2.24.0 Dec 16 12:14:49.735799 ignition[1230]: Stage: disks Dec 16 12:14:49.739314 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 12:14:49.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:49.735968 ignition[1230]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:14:49.745389 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 12:14:49.735976 ignition[1230]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:14:49.753160 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 12:14:49.736573 ignition[1230]: disks: disks passed Dec 16 12:14:49.762196 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:14:49.736606 ignition[1230]: Ignition finished successfully Dec 16 12:14:49.770265 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:14:49.778668 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:14:49.787541 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 12:14:49.902523 systemd-fsck[1238]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Dec 16 12:14:49.909662 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 12:14:49.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:49.916395 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 12:14:50.197473 kernel: EXT4-fs (sda9): mounted filesystem 0e69f709-36a9-4e15-b0c9-c7e150185653 r/w with ordered data mode. Quota mode: none. Dec 16 12:14:50.198253 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 12:14:50.204767 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 12:14:50.239586 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:14:50.254891 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 12:14:50.262110 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 16 12:14:50.272381 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 12:14:50.272408 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:14:50.278030 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 12:14:50.294584 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 16 12:14:50.313805 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1252) Dec 16 12:14:50.324025 kernel: BTRFS info (device sda6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 12:14:50.324052 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:14:50.333964 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:14:50.333988 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:14:50.335081 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:14:51.039880 coreos-metadata[1254]: Dec 16 12:14:51.039 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 12:14:51.045875 coreos-metadata[1254]: Dec 16 12:14:51.045 INFO Fetch successful Dec 16 12:14:51.049713 coreos-metadata[1254]: Dec 16 12:14:51.046 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 16 12:14:51.058103 coreos-metadata[1254]: Dec 16 12:14:51.054 INFO Fetch successful Dec 16 12:14:51.068267 coreos-metadata[1254]: Dec 16 12:14:51.068 INFO wrote hostname ci-4547.0.0-a-8648328498 to /sysroot/etc/hostname Dec 16 12:14:51.076497 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 12:14:51.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:52.453338 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 12:14:52.465713 kernel: kauditd_printk_skb: 11 callbacks suppressed Dec 16 12:14:52.465732 kernel: audit: type=1130 audit(1765887292.457:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:52.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:52.458808 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 12:14:52.490729 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 12:14:52.516577 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 12:14:52.528472 kernel: BTRFS info (device sda6): last unmount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 12:14:52.541660 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 12:14:52.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:52.554084 ignition[1357]: INFO : Ignition 2.24.0 Dec 16 12:14:52.566503 kernel: audit: type=1130 audit(1765887292.548:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:52.564328 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 12:14:52.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:14:52.585231 ignition[1357]: INFO : Stage: mount Dec 16 12:14:52.585231 ignition[1357]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:14:52.585231 ignition[1357]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:14:52.585231 ignition[1357]: INFO : mount: mount passed Dec 16 12:14:52.585231 ignition[1357]: INFO : Ignition finished successfully Dec 16 12:14:52.610398 kernel: audit: type=1130 audit(1765887292.568:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:52.589361 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 12:14:52.611567 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:14:52.636557 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1366) Dec 16 12:14:52.645973 kernel: BTRFS info (device sda6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 12:14:52.646013 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:14:52.655536 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:14:52.655573 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:14:52.657507 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:14:52.682741 ignition[1384]: INFO : Ignition 2.24.0 Dec 16 12:14:52.682741 ignition[1384]: INFO : Stage: files Dec 16 12:14:52.688689 ignition[1384]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:14:52.688689 ignition[1384]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:14:52.688689 ignition[1384]: DEBUG : files: compiled without relabeling support, skipping Dec 16 12:14:52.702245 ignition[1384]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 12:14:52.702245 ignition[1384]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 12:14:52.774606 ignition[1384]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 12:14:52.780301 ignition[1384]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 12:14:52.780301 ignition[1384]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 12:14:52.774942 unknown[1384]: wrote ssh authorized keys file for user: core Dec 16 12:14:52.805277 ignition[1384]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 16 12:14:52.813144 ignition[1384]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Dec 16 12:14:53.004974 ignition[1384]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 12:14:53.289200 ignition[1384]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 16 12:14:53.289200 ignition[1384]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 12:14:53.289200 ignition[1384]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 12:14:53.289200 ignition[1384]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 
16 12:14:53.289200 ignition[1384]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:14:53.289200 ignition[1384]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:14:53.289200 ignition[1384]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:14:53.289200 ignition[1384]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:14:53.289200 ignition[1384]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:14:53.352768 ignition[1384]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:14:53.352768 ignition[1384]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:14:53.352768 ignition[1384]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:14:53.352768 ignition[1384]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:14:53.352768 ignition[1384]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:14:53.352768 ignition[1384]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Dec 16 12:14:53.890578 ignition[1384]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 12:14:54.111036 ignition[1384]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:14:54.111036 ignition[1384]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 12:14:54.133395 ignition[1384]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:14:54.144426 ignition[1384]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:14:54.144426 ignition[1384]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 12:14:54.144426 ignition[1384]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 12:14:54.181497 kernel: audit: type=1130 audit(1765887294.155:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:14:54.181553 ignition[1384]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 12:14:54.181553 ignition[1384]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:14:54.181553 ignition[1384]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:14:54.181553 ignition[1384]: INFO : files: files passed Dec 16 12:14:54.181553 ignition[1384]: INFO : Ignition finished successfully Dec 16 12:14:54.152654 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 12:14:54.172051 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 12:14:54.247725 kernel: audit: type=1130 audit(1765887294.221:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.247743 kernel: audit: type=1131 audit(1765887294.221:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.188946 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 12:14:54.205639 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 12:14:54.213536 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 12:14:54.263645 initrd-setup-root-after-ignition[1419]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:14:54.262621 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:14:54.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.291853 initrd-setup-root-after-ignition[1415]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:14:54.291853 initrd-setup-root-after-ignition[1415]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:14:54.314590 kernel: audit: type=1130 audit(1765887294.273:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.274714 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 12:14:54.297577 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 12:14:54.338850 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 12:14:54.339511 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Dec 16 12:14:54.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.348124 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 12:14:54.388790 kernel: audit: type=1130 audit(1765887294.346:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.388809 kernel: audit: type=1131 audit(1765887294.346:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.378530 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 12:14:54.382844 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 12:14:54.383486 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 12:14:54.412815 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:14:54.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.418470 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:14:54.442767 kernel: audit: type=1130 audit(1765887294.416:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.454384 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:14:54.454485 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:14:54.459004 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:14:54.467874 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:14:54.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.475845 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:14:54.475937 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:14:54.488108 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:14:54.492472 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:14:54.499568 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:14:54.508367 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:14:54.516725 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:14:54.524822 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:14:54.533680 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Dec 16 12:14:54.542260 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:14:54.551479 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 12:14:54.559456 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:14:54.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.567743 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:14:54.575908 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:14:54.576019 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:14:54.587634 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:14:54.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.592123 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:14:54.599822 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:14:54.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.599883 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:14:54.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.608529 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 12:14:54.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.608622 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:14:54.621027 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 12:14:54.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.621165 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:14:54.630633 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 12:14:54.630744 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:14:54.638514 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Dec 16 12:14:54.701506 ignition[1439]: INFO : Ignition 2.24.0 Dec 16 12:14:54.701506 ignition[1439]: INFO : Stage: umount Dec 16 12:14:54.701506 ignition[1439]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:14:54.701506 ignition[1439]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:14:54.701506 ignition[1439]: INFO : umount: umount passed Dec 16 12:14:54.701506 ignition[1439]: INFO : Ignition finished successfully Dec 16 12:14:54.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.638645 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 12:14:54.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.650557 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:14:54.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.658511 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:14:54.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.658730 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:14:54.683643 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:14:54.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.693704 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 12:14:54.693925 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:14:54.703432 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:14:54.703518 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:14:54.711579 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:14:54.711652 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:14:54.725536 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 12:14:54.725713 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 12:14:54.739765 systemd[1]: ignition-disks.service: Deactivated successfully. 
Dec 16 12:14:54.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.739843 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:14:54.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.746579 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:14:54.746614 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:14:54.756171 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 12:14:54.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.756208 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 12:14:54.764065 systemd[1]: Stopped target network.target - Network. Dec 16 12:14:54.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.771717 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 12:14:54.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.771765 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:14:54.781236 systemd[1]: Stopped target paths.target - Path Units. Dec 16 12:14:54.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.788677 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 12:14:54.936000 audit: BPF prog-id=9 op=UNLOAD Dec 16 12:14:54.936000 audit: BPF prog-id=6 op=UNLOAD Dec 16 12:14:54.794488 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:14:54.807130 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 12:14:54.817468 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 12:14:54.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.825530 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 12:14:54.825575 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:14:54.833774 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 12:14:54.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.833803 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Dec 16 12:14:54.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.842038 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 12:14:55.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.842056 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:14:54.849996 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 12:14:54.850043 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 12:14:54.857536 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 12:14:54.857567 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 12:14:54.865562 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 12:14:55.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.873766 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 12:14:54.883369 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 12:14:54.883925 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:14:55.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.883995 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 12:14:55.091191 kernel: hv_netvsc 002248be-ab79-0022-48be-ab79002248be eth0: Data path switched from VF: enP22337s1 Dec 16 12:14:54.892697 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:14:55.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.894467 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:14:54.905835 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 12:14:55.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.905899 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 12:14:54.919522 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 12:14:54.919589 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 12:14:54.935704 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 12:14:55.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.942692 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 12:14:55.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 16 12:14:54.942726 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:14:55.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.954083 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 12:14:54.954132 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 12:14:55.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:55.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.963435 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 12:14:54.976512 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 12:14:55.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:14:54.976567 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:14:54.985780 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:14:54.985809 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:14:54.998543 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:14:54.998581 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:14:55.007885 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:14:55.037122 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:14:55.037246 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:14:55.047219 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:14:55.047250 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:14:55.056815 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:14:55.056839 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:14:55.066181 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 12:14:55.066214 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:14:55.090540 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:14:55.090591 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:14:55.099543 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:14:55.099595 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:14:55.117544 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:14:55.129494 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:14:55.129546 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:14:55.143280 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Dec 16 12:14:55.143318 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:14:55.153428 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:14:55.153595 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:14:55.163165 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:14:55.163256 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:14:55.180959 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:14:55.181047 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:14:55.189239 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:14:55.197815 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:14:55.221511 systemd[1]: Switching root. Dec 16 12:14:55.341748 systemd-journald[541]: Received SIGTERM from PID 1 (systemd). Dec 16 12:14:55.341773 systemd-journald[541]: Journal stopped Dec 16 12:15:00.120077 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:15:00.120096 kernel: SELinux: policy capability open_perms=1 Dec 16 12:15:00.120104 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:15:00.120110 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:15:00.120116 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:15:00.120122 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:15:00.120128 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:15:00.120134 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:15:00.120139 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:15:00.120146 systemd[1]: Successfully loaded SELinux policy in 147.312ms. Dec 16 12:15:00.120154 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.368ms. Dec 16 12:15:00.120161 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:15:00.120167 systemd[1]: Detected virtualization microsoft. Dec 16 12:15:00.120173 systemd[1]: Detected architecture arm64. Dec 16 12:15:00.120181 systemd[1]: Detected first boot. Dec 16 12:15:00.120187 systemd[1]: Hostname set to . Dec 16 12:15:00.120194 systemd[1]: Initializing machine ID from random generator. Dec 16 12:15:00.120200 zram_generator::config[1482]: No configuration found. Dec 16 12:15:00.120207 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:15:00.120214 systemd[1]: Populated /etc with preset unit settings. Dec 16 12:15:00.120220 kernel: kauditd_printk_skb: 44 callbacks suppressed Dec 16 12:15:00.120226 kernel: audit: type=1334 audit(1765887299.331:95): prog-id=12 op=LOAD Dec 16 12:15:00.120232 kernel: audit: type=1334 audit(1765887299.331:96): prog-id=3 op=UNLOAD Dec 16 12:15:00.120237 kernel: audit: type=1334 audit(1765887299.334:97): prog-id=13 op=LOAD Dec 16 12:15:00.120244 kernel: audit: type=1334 audit(1765887299.339:98): prog-id=14 op=LOAD Dec 16 12:15:00.120250 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Dec 16 12:15:00.120257 kernel: audit: type=1334 audit(1765887299.339:99): prog-id=4 op=UNLOAD Dec 16 12:15:00.120263 kernel: audit: type=1334 audit(1765887299.339:100): prog-id=5 op=UNLOAD Dec 16 12:15:00.120269 kernel: audit: type=1131 audit(1765887299.343:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.120275 kernel: audit: type=1334 audit(1765887299.373:102): prog-id=12 op=UNLOAD Dec 16 12:15:00.120282 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:15:00.120289 kernel: audit: type=1130 audit(1765887299.385:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.120295 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:15:00.120302 kernel: audit: type=1131 audit(1765887299.385:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.120308 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:15:00.120315 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:15:00.120322 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:15:00.120329 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:15:00.120335 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:15:00.120342 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:15:00.120350 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:15:00.120356 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:15:00.120363 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:15:00.120370 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:15:00.120377 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:15:00.120384 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:15:00.120390 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:15:00.120397 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:15:00.120403 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 16 12:15:00.120410 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:15:00.120417 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:15:00.120425 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:15:00.120431 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:15:00.120438 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:15:00.120444 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
Dec 16 12:15:00.120472 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:15:00.120481 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:15:00.120487 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 12:15:00.120494 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:15:00.120500 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:15:00.120507 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:15:00.120513 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:15:00.120521 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:15:00.120528 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:15:00.120534 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 12:15:00.120541 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:15:00.120548 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 12:15:00.120555 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 12:15:00.120561 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:15:00.120568 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:15:00.120574 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:15:00.120581 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:15:00.120588 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:15:00.120596 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:15:00.120602 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:15:00.120609 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:15:00.120615 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:15:00.120622 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:15:00.120629 systemd[1]: Reached target machines.target - Containers. Dec 16 12:15:00.120635 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:15:00.120643 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:15:00.120649 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:15:00.120656 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 12:15:00.120663 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:15:00.120669 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:15:00.120676 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:15:00.120683 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:15:00.120690 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 16 12:15:00.120697 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:15:00.120704 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:15:00.120710 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:15:00.120717 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:15:00.120841 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:15:00.120854 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:15:00.120861 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:15:00.120867 kernel: fuse: init (API version 7.41) Dec 16 12:15:00.120874 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:15:00.120881 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:15:00.120887 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:15:00.120909 systemd-journald[1565]: Collecting audit messages is enabled. Dec 16 12:15:00.120927 systemd-journald[1565]: Journal started Dec 16 12:15:00.120942 systemd-journald[1565]: Runtime Journal (/run/log/journal/634b81ef2f7648f58a6804aaa8d88a94) is 8M, max 78.3M, 70.3M free. Dec 16 12:14:59.689000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 12:15:00.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.040000 audit: BPF prog-id=14 op=UNLOAD Dec 16 12:15:00.040000 audit: BPF prog-id=13 op=UNLOAD Dec 16 12:15:00.040000 audit: BPF prog-id=15 op=LOAD Dec 16 12:15:00.040000 audit: BPF prog-id=16 op=LOAD Dec 16 12:15:00.040000 audit: BPF prog-id=17 op=LOAD Dec 16 12:15:00.113000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 12:15:00.113000 audit[1565]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd45a1210 a2=4000 a3=0 items=0 ppid=1 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:00.113000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 12:14:59.329293 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:14:59.340812 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 16 12:14:59.344825 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:14:59.345106 systemd[1]: systemd-journald.service: Consumed 2.397s CPU time. 
Dec 16 12:15:00.140855 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 12:15:00.140902 kernel: ACPI: bus type drm_connector registered Dec 16 12:15:00.160781 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:15:00.169970 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:15:00.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.170777 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:15:00.175001 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:15:00.179790 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:15:00.183754 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:15:00.188222 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:15:00.192978 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:15:00.197354 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:15:00.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.202821 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:15:00.204509 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:15:00.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.209687 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:15:00.209817 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:15:00.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.214616 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:15:00.214728 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:15:00.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:15:00.219050 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:15:00.219164 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:15:00.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.224246 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:15:00.224348 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:15:00.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.228830 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:15:00.228941 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:15:00.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.233584 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:15:00.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.238334 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:15:00.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.244194 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:15:00.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.249377 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:15:00.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.255065 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Dec 16 12:15:00.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.267487 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:15:00.272320 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 12:15:00.278125 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:15:00.292528 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 12:15:00.297967 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:15:00.297990 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:15:00.302732 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:15:00.329358 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:15:00.329467 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:15:00.330284 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:15:00.343902 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:15:00.348505 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:15:00.350577 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:15:00.355688 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:15:00.356516 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:15:00.361288 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:15:00.367528 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:15:00.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.372560 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:15:00.377246 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:15:00.384074 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:15:00.419762 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:15:00.424803 systemd-journald[1565]: Time spent on flushing to /var/log/journal/634b81ef2f7648f58a6804aaa8d88a94 is 11.021ms for 1079 entries. Dec 16 12:15:00.424803 systemd-journald[1565]: System Journal (/var/log/journal/634b81ef2f7648f58a6804aaa8d88a94) is 8M, max 2.2G, 2.2G free. Dec 16 12:15:00.456075 systemd-journald[1565]: Received client request to flush runtime journal. Dec 16 12:15:00.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 12:15:00.430221 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:15:00.435938 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:15:00.457207 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:15:00.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.462749 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:15:00.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.475468 kernel: loop1: detected capacity change from 0 to 27544 Dec 16 12:15:00.505316 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:15:00.505852 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:15:00.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.537535 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 12:15:00.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.541000 audit: BPF prog-id=18 op=LOAD Dec 16 12:15:00.542000 audit: BPF prog-id=19 op=LOAD Dec 16 12:15:00.542000 audit: BPF prog-id=20 op=LOAD Dec 16 12:15:00.544570 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 12:15:00.551000 audit: BPF prog-id=21 op=LOAD Dec 16 12:15:00.556572 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:15:00.565586 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:15:00.574000 audit: BPF prog-id=22 op=LOAD Dec 16 12:15:00.574000 audit: BPF prog-id=23 op=LOAD Dec 16 12:15:00.574000 audit: BPF prog-id=24 op=LOAD Dec 16 12:15:00.577634 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 12:15:00.581000 audit: BPF prog-id=25 op=LOAD Dec 16 12:15:00.581000 audit: BPF prog-id=26 op=LOAD Dec 16 12:15:00.581000 audit: BPF prog-id=27 op=LOAD Dec 16 12:15:00.583102 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 12:15:00.610804 systemd-nsresourced[1640]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 12:15:00.611561 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 12:15:00.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.619777 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
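Note (illustrative annotation, not part of the captured log): the systemd-journald line above reports flushing 1079 entries to /var/log/journal in 11.021 ms. A minimal Python sketch of the per-entry cost implied by those two figures:

# Figures taken from the systemd-journald flush line above.
flush_ms = 11.021
entries = 1079
per_entry_us = flush_ms * 1000 / entries
print(f"{per_entry_us:.1f} us per journal entry")  # ~10.2 us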
Dec 16 12:15:00.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.630832 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. Dec 16 12:15:00.630846 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. Dec 16 12:15:00.634788 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:15:00.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.709077 systemd-oomd[1637]: No swap; memory pressure usage will be degraded Dec 16 12:15:00.709411 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 12:15:00.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.786519 systemd-resolved[1638]: Positive Trust Anchors: Dec 16 12:15:00.786775 systemd-resolved[1638]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:15:00.786826 systemd-resolved[1638]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:15:00.786851 systemd-resolved[1638]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:15:00.831473 kernel: loop2: detected capacity change from 0 to 45344 Dec 16 12:15:00.885625 systemd-resolved[1638]: Using system hostname 'ci-4547.0.0-a-8648328498'. Dec 16 12:15:00.886584 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:15:00.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.891689 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:15:00.924046 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 12:15:00.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:00.928000 audit: BPF prog-id=8 op=UNLOAD Dec 16 12:15:00.928000 audit: BPF prog-id=7 op=UNLOAD Dec 16 12:15:00.928000 audit: BPF prog-id=28 op=LOAD Dec 16 12:15:00.928000 audit: BPF prog-id=29 op=LOAD Dec 16 12:15:00.930877 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:15:00.956022 systemd-udevd[1660]: Using default interface naming scheme 'v257'. 
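Note (illustrative annotation, not part of the captured log): the systemd-resolved lines above list the positive trust anchors as DS records for the DNS root zone. A minimal Python sketch splitting one of those records into its fields; the field meanings (key tag, algorithm 8 = RSA/SHA-256, digest type 2 = SHA-256) are standard DNSSEC conventions, not something stated in the log itself:

# One DS record copied verbatim from the systemd-resolved line above.
record = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
owner, rr_class, rr_type, key_tag, algorithm, digest_type, digest = record.split()
print(owner)             # "."  -> the root zone
print(int(key_tag))      # 20326, key tag of the root key-signing key
print(int(algorithm))    # 8    -> RSA/SHA-256
print(int(digest_type))  # 2    -> SHA-256 digest of the DNSKEY
print(len(digest) * 4)   # 256  -> digest length in bits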
Dec 16 12:15:01.190362 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:15:01.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:01.199000 audit: BPF prog-id=30 op=LOAD Dec 16 12:15:01.201811 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:15:01.263922 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 16 12:15:01.309978 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 12:15:01.310033 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 12:15:01.337557 kernel: loop3: detected capacity change from 0 to 207008 Dec 16 12:15:01.354464 kernel: hv_vmbus: registering driver hv_balloon Dec 16 12:15:01.371356 kernel: hv_vmbus: registering driver hyperv_fb Dec 16 12:15:01.371414 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 16 12:15:01.371432 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 16 12:15:01.371461 kernel: hv_balloon: Memory hot add disabled on ARM64 Dec 16 12:15:01.371484 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 16 12:15:01.396461 kernel: Console: switching to colour dummy device 80x25 Dec 16 12:15:01.412155 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 12:15:01.413515 kernel: loop4: detected capacity change from 0 to 100192 Dec 16 12:15:01.420545 systemd-networkd[1678]: lo: Link UP Dec 16 12:15:01.420551 systemd-networkd[1678]: lo: Gained carrier Dec 16 12:15:01.421576 systemd-networkd[1678]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:15:01.421581 systemd-networkd[1678]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:15:01.421785 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:15:01.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:01.427000 systemd[1]: Reached target network.target - Network. Dec 16 12:15:01.432756 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:15:01.439579 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:15:01.462616 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:15:01.472360 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:15:01.474690 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:15:01.479378 kernel: mlx5_core 5741:00:02.0 enP22337s1: Link up Dec 16 12:15:01.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:15:01.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:01.481536 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:15:01.487620 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:15:01.488019 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:15:01.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:01.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:01.498466 kernel: hv_netvsc 002248be-ab79-0022-48be-ab79002248be eth0: Data path switched to VF: enP22337s1 Dec 16 12:15:01.500189 systemd-networkd[1678]: enP22337s1: Link UP Dec 16 12:15:01.500350 systemd-networkd[1678]: eth0: Link UP Dec 16 12:15:01.500353 systemd-networkd[1678]: eth0: Gained carrier Dec 16 12:15:01.500365 systemd-networkd[1678]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:15:01.500398 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:15:01.507716 systemd-networkd[1678]: enP22337s1: Gained carrier Dec 16 12:15:01.513525 systemd-networkd[1678]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:15:01.514703 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 12:15:01.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:01.578487 kernel: MACsec IEEE 802.1AE Dec 16 12:15:01.585485 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 16 12:15:01.591037 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 12:15:01.672333 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 12:15:01.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:01.816467 kernel: loop5: detected capacity change from 0 to 27544 Dec 16 12:15:01.827467 kernel: loop6: detected capacity change from 0 to 45344 Dec 16 12:15:01.838475 kernel: loop7: detected capacity change from 0 to 207008 Dec 16 12:15:01.851518 kernel: loop1: detected capacity change from 0 to 100192 Dec 16 12:15:01.862656 (sd-merge)[1792]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Dec 16 12:15:01.864774 (sd-merge)[1792]: Merged extensions into '/usr'. 
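Note (illustrative annotation, not part of the captured log): the systemd-networkd line above shows eth0 leasing 10.200.20.11/24 with gateway 10.200.20.1, handed out by 168.63.129.16 (the Azure platform address). A minimal Python sketch, using only the values from that line, showing which of them sit inside the leased subnet:

import ipaddress

# Values copied from the DHCPv4 line above.
lease = ipaddress.ip_interface("10.200.20.11/24")
gateway = ipaddress.ip_address("10.200.20.1")
dhcp_server = ipaddress.ip_address("168.63.129.16")

print(gateway in lease.network)      # True, the gateway is on the local /24
print(dhcp_server in lease.network)  # False, the lease was served from outside the subnet
print(lease.network)                 # 10.200.20.0/24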
Dec 16 12:15:01.867526 systemd[1]: Reload requested from client PID 1622 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:15:01.867539 systemd[1]: Reloading... Dec 16 12:15:01.919475 zram_generator::config[1826]: No configuration found. Dec 16 12:15:02.085412 systemd[1]: Reloading finished in 217 ms. Dec 16 12:15:02.120464 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:15:02.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.126036 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:15:02.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.137322 systemd[1]: Starting ensure-sysext.service... Dec 16 12:15:02.142569 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:15:02.146000 audit: BPF prog-id=31 op=LOAD Dec 16 12:15:02.146000 audit: BPF prog-id=18 op=UNLOAD Dec 16 12:15:02.146000 audit: BPF prog-id=32 op=LOAD Dec 16 12:15:02.146000 audit: BPF prog-id=33 op=LOAD Dec 16 12:15:02.146000 audit: BPF prog-id=19 op=UNLOAD Dec 16 12:15:02.146000 audit: BPF prog-id=20 op=UNLOAD Dec 16 12:15:02.146000 audit: BPF prog-id=34 op=LOAD Dec 16 12:15:02.146000 audit: BPF prog-id=35 op=LOAD Dec 16 12:15:02.146000 audit: BPF prog-id=28 op=UNLOAD Dec 16 12:15:02.146000 audit: BPF prog-id=29 op=UNLOAD Dec 16 12:15:02.146000 audit: BPF prog-id=36 op=LOAD Dec 16 12:15:02.146000 audit: BPF prog-id=22 op=UNLOAD Dec 16 12:15:02.146000 audit: BPF prog-id=37 op=LOAD Dec 16 12:15:02.146000 audit: BPF prog-id=38 op=LOAD Dec 16 12:15:02.146000 audit: BPF prog-id=23 op=UNLOAD Dec 16 12:15:02.146000 audit: BPF prog-id=24 op=UNLOAD Dec 16 12:15:02.147000 audit: BPF prog-id=39 op=LOAD Dec 16 12:15:02.147000 audit: BPF prog-id=25 op=UNLOAD Dec 16 12:15:02.147000 audit: BPF prog-id=40 op=LOAD Dec 16 12:15:02.147000 audit: BPF prog-id=41 op=LOAD Dec 16 12:15:02.147000 audit: BPF prog-id=26 op=UNLOAD Dec 16 12:15:02.147000 audit: BPF prog-id=27 op=UNLOAD Dec 16 12:15:02.147000 audit: BPF prog-id=42 op=LOAD Dec 16 12:15:02.147000 audit: BPF prog-id=15 op=UNLOAD Dec 16 12:15:02.147000 audit: BPF prog-id=43 op=LOAD Dec 16 12:15:02.147000 audit: BPF prog-id=44 op=LOAD Dec 16 12:15:02.147000 audit: BPF prog-id=16 op=UNLOAD Dec 16 12:15:02.147000 audit: BPF prog-id=17 op=UNLOAD Dec 16 12:15:02.148000 audit: BPF prog-id=45 op=LOAD Dec 16 12:15:02.148000 audit: BPF prog-id=30 op=UNLOAD Dec 16 12:15:02.148000 audit: BPF prog-id=46 op=LOAD Dec 16 12:15:02.148000 audit: BPF prog-id=21 op=UNLOAD Dec 16 12:15:02.155604 systemd[1]: Reload requested from client PID 1882 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:15:02.155618 systemd[1]: Reloading... Dec 16 12:15:02.157058 systemd-tmpfiles[1883]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:15:02.157352 systemd-tmpfiles[1883]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 12:15:02.157945 systemd-tmpfiles[1883]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Dec 16 12:15:02.159213 systemd-tmpfiles[1883]: ACLs are not supported, ignoring. Dec 16 12:15:02.159344 systemd-tmpfiles[1883]: ACLs are not supported, ignoring. Dec 16 12:15:02.180243 systemd-tmpfiles[1883]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:15:02.180251 systemd-tmpfiles[1883]: Skipping /boot Dec 16 12:15:02.186863 systemd-tmpfiles[1883]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:15:02.186948 systemd-tmpfiles[1883]: Skipping /boot Dec 16 12:15:02.212469 zram_generator::config[1917]: No configuration found. Dec 16 12:15:02.365443 systemd[1]: Reloading finished in 209 ms. Dec 16 12:15:02.376517 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:15:02.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.382000 audit: BPF prog-id=47 op=LOAD Dec 16 12:15:02.382000 audit: BPF prog-id=45 op=UNLOAD Dec 16 12:15:02.382000 audit: BPF prog-id=48 op=LOAD Dec 16 12:15:02.382000 audit: BPF prog-id=49 op=LOAD Dec 16 12:15:02.382000 audit: BPF prog-id=34 op=UNLOAD Dec 16 12:15:02.382000 audit: BPF prog-id=35 op=UNLOAD Dec 16 12:15:02.383000 audit: BPF prog-id=50 op=LOAD Dec 16 12:15:02.383000 audit: BPF prog-id=31 op=UNLOAD Dec 16 12:15:02.383000 audit: BPF prog-id=51 op=LOAD Dec 16 12:15:02.383000 audit: BPF prog-id=52 op=LOAD Dec 16 12:15:02.383000 audit: BPF prog-id=32 op=UNLOAD Dec 16 12:15:02.383000 audit: BPF prog-id=33 op=UNLOAD Dec 16 12:15:02.383000 audit: BPF prog-id=53 op=LOAD Dec 16 12:15:02.383000 audit: BPF prog-id=39 op=UNLOAD Dec 16 12:15:02.383000 audit: BPF prog-id=54 op=LOAD Dec 16 12:15:02.383000 audit: BPF prog-id=55 op=LOAD Dec 16 12:15:02.383000 audit: BPF prog-id=40 op=UNLOAD Dec 16 12:15:02.383000 audit: BPF prog-id=41 op=UNLOAD Dec 16 12:15:02.384000 audit: BPF prog-id=56 op=LOAD Dec 16 12:15:02.384000 audit: BPF prog-id=42 op=UNLOAD Dec 16 12:15:02.384000 audit: BPF prog-id=57 op=LOAD Dec 16 12:15:02.384000 audit: BPF prog-id=58 op=LOAD Dec 16 12:15:02.384000 audit: BPF prog-id=43 op=UNLOAD Dec 16 12:15:02.384000 audit: BPF prog-id=44 op=UNLOAD Dec 16 12:15:02.384000 audit: BPF prog-id=59 op=LOAD Dec 16 12:15:02.384000 audit: BPF prog-id=36 op=UNLOAD Dec 16 12:15:02.384000 audit: BPF prog-id=60 op=LOAD Dec 16 12:15:02.384000 audit: BPF prog-id=61 op=LOAD Dec 16 12:15:02.384000 audit: BPF prog-id=37 op=UNLOAD Dec 16 12:15:02.384000 audit: BPF prog-id=38 op=UNLOAD Dec 16 12:15:02.385000 audit: BPF prog-id=62 op=LOAD Dec 16 12:15:02.385000 audit: BPF prog-id=46 op=UNLOAD Dec 16 12:15:02.401841 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:15:02.412104 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 12:15:02.417328 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 12:15:02.422138 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 12:15:02.429631 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 12:15:02.439499 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:15:02.440287 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Dec 16 12:15:02.443000 audit[1978]: SYSTEM_BOOT pid=1978 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.446705 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:15:02.458808 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:15:02.463389 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:15:02.463579 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:15:02.463653 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:15:02.464407 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:15:02.464637 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:15:02.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.470116 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:15:02.470287 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:15:02.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.476389 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:15:02.476540 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:15:02.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.489249 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 12:15:02.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.496187 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Dec 16 12:15:02.497597 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:15:02.506632 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:15:02.520931 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:15:02.525158 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:15:02.525514 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:15:02.525704 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:15:02.526608 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:15:02.526846 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:15:02.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.533344 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:15:02.533526 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:15:02.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.539829 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:15:02.540287 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:15:02.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.546551 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 12:15:02.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.556025 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:15:02.557080 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Dec 16 12:15:02.565627 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:15:02.570905 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:15:02.578334 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:15:02.582530 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:15:02.582661 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:15:02.582732 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:15:02.582832 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 12:15:02.587582 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:15:02.591544 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:15:02.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.596600 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:15:02.596739 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:15:02.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.601244 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:15:02.601373 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:15:02.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.606612 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:15:02.606740 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:15:02.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 16 12:15:02.613171 systemd[1]: Finished ensure-sysext.service. Dec 16 12:15:02.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:02.619003 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:15:02.619063 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:15:02.916000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 12:15:02.916000 audit[2021]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcb6eb130 a2=420 a3=0 items=0 ppid=1974 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:02.916000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:15:02.918181 augenrules[2021]: No rules Dec 16 12:15:02.919286 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:15:02.919567 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:15:03.240803 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:15:03.246092 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:15:03.520607 systemd-networkd[1678]: eth0: Gained IPv6LL Dec 16 12:15:03.526834 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:15:03.534080 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:15:08.762270 ldconfig[1976]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 12:15:08.772745 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 12:15:08.780054 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 12:15:08.794234 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:15:08.798981 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:15:08.803201 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:15:08.807990 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 12:15:08.813128 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:15:08.817622 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:15:08.822750 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 12:15:08.828040 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 12:15:08.832483 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
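Note (illustrative annotation, not part of the captured log): the audit PROCTITLE value recorded above is the process command line, hex-encoded with NUL bytes separating the argv elements. A minimal Python sketch decoding the exact value from the record, which matches the auditctl run that loaded /etc/audit/audit.rules:

# PROCTITLE hex copied verbatim from the audit record above.
proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
argv = [a.decode() for a in bytes.fromhex(proctitle).split(b"\x00")]
print(argv)  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']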
Dec 16 12:15:08.837564 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:15:08.837587 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:15:08.841252 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:15:08.846275 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:15:08.851433 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:15:08.856910 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:15:08.862359 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:15:08.867337 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:15:08.872895 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:15:08.877318 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:15:08.882617 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:15:08.886941 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:15:08.890858 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:15:08.894445 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:15:08.894475 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:15:08.896376 systemd[1]: Starting chronyd.service - NTP client/server... Dec 16 12:15:08.909544 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:15:08.915056 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 12:15:08.925699 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:15:08.929230 chronyd[2034]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Dec 16 12:15:08.932659 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:15:08.938222 chronyd[2034]: Timezone right/UTC failed leap second check, ignoring Dec 16 12:15:08.938356 chronyd[2034]: Loaded seccomp filter (level 2) Dec 16 12:15:08.943006 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:15:08.947718 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:15:08.951757 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:15:08.954098 jq[2042]: false Dec 16 12:15:08.954584 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Dec 16 12:15:08.956440 KVP[2044]: KVP starting; pid is:2044 Dec 16 12:15:08.960919 KVP[2044]: KVP LIC Version: 3.1 Dec 16 12:15:08.961495 kernel: hv_utils: KVP IC version 4.0 Dec 16 12:15:08.961834 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Dec 16 12:15:08.962735 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:15:08.968367 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:15:08.974492 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
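Note (illustrative annotation, not part of the captured log): the chronyd startup line above lists its build-time features, each token prefixed with '+' or '-' (conventionally, built with or without that feature). A minimal Python sketch splitting that string into the two groups:

# Feature string copied from the chronyd startup line above.
features = "+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG"
enabled  = [f[1:] for f in features.split() if f[0] == "+"]
disabled = [f[1:] for f in features.split() if f[0] == "-"]
print(enabled)   # ['CMDMON', 'REFCLOCK', 'RTC', 'PRIVDROP', 'SCFILTER', 'NTS', 'SECHASH', 'IPV6']
print(disabled)  # ['SIGND', 'DEBUG']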
Dec 16 12:15:08.979810 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:15:08.986336 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:15:08.992498 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:15:09.001100 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:15:09.005948 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:15:09.006573 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:15:09.007306 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:15:09.011167 extend-filesystems[2043]: Found /dev/sda6 Dec 16 12:15:09.014154 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:15:09.019306 jq[2064]: true Dec 16 12:15:09.024534 systemd[1]: Started chronyd.service - NTP client/server. Dec 16 12:15:09.030633 extend-filesystems[2043]: Found /dev/sda9 Dec 16 12:15:09.033637 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:15:09.041091 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:15:09.041707 extend-filesystems[2043]: Checking size of /dev/sda9 Dec 16 12:15:09.041550 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:15:09.043752 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:15:09.053772 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:15:09.053949 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:15:09.082680 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:15:09.085106 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:15:09.093480 extend-filesystems[2043]: Resized partition /dev/sda9 Dec 16 12:15:09.102911 jq[2081]: true Dec 16 12:15:09.102985 update_engine[2061]: I20251216 12:15:09.098644 2061 main.cc:92] Flatcar Update Engine starting Dec 16 12:15:09.109294 extend-filesystems[2100]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:15:09.140172 kernel: EXT4-fs (sda9): resizing filesystem from 6359552 to 6376955 blocks Dec 16 12:15:09.140631 kernel: EXT4-fs (sda9): resized filesystem to 6376955 Dec 16 12:15:09.146315 tar[2080]: linux-arm64/LICENSE Dec 16 12:15:09.161422 tar[2080]: linux-arm64/helm Dec 16 12:15:09.171076 systemd-logind[2058]: New seat seat0. Dec 16 12:15:09.171998 extend-filesystems[2100]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 16 12:15:09.171998 extend-filesystems[2100]: old_desc_blocks = 4, new_desc_blocks = 4 Dec 16 12:15:09.171998 extend-filesystems[2100]: The filesystem on /dev/sda9 is now 6376955 (4k) blocks long. Dec 16 12:15:09.238938 extend-filesystems[2043]: Resized filesystem in /dev/sda9 Dec 16 12:15:09.176674 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:15:09.271267 bash[2120]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:15:09.176906 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
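Note (illustrative annotation, not part of the captured log): the extend-filesystems entries above show resize2fs growing /dev/sda9 online from 6359552 to 6376955 blocks of 4 KiB. A minimal Python sketch of what that growth amounts to, using only the figures from the log:

# Block counts and block size taken from the resize2fs lines above.
block_size = 4096
old_blocks, new_blocks = 6_359_552, 6_376_955

print(f"grew by {(new_blocks - old_blocks) * block_size / 2**20:.1f} MiB")  # ~68.0 MiB
print(f"now {new_blocks * block_size / 2**30:.2f} GiB")                     # ~24.33 GiB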
Dec 16 12:15:09.179284 systemd-logind[2058]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 16 12:15:09.191987 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:15:09.231778 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:15:09.258843 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 12:15:09.278247 dbus-daemon[2037]: [system] SELinux support is enabled Dec 16 12:15:09.278497 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:15:09.286685 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:15:09.286718 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:15:09.294891 update_engine[2061]: I20251216 12:15:09.292703 2061 update_check_scheduler.cc:74] Next update check in 9m56s Dec 16 12:15:09.293987 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:15:09.294004 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:15:09.303990 dbus-daemon[2037]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 12:15:09.304128 systemd[1]: Started update-engine.service - Update Engine. Dec 16 12:15:09.328489 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:15:09.378214 coreos-metadata[2036]: Dec 16 12:15:09.378 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 12:15:09.383015 coreos-metadata[2036]: Dec 16 12:15:09.382 INFO Fetch successful Dec 16 12:15:09.383015 coreos-metadata[2036]: Dec 16 12:15:09.382 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 16 12:15:09.387141 coreos-metadata[2036]: Dec 16 12:15:09.387 INFO Fetch successful Dec 16 12:15:09.388584 coreos-metadata[2036]: Dec 16 12:15:09.388 INFO Fetching http://168.63.129.16/machine/76e79b02-fc69-4c97-b472-50e7e38e23de/442880b7%2D7fb0%2D4e94%2D8c7e%2D164dc67229b7.%5Fci%2D4547.0.0%2Da%2D8648328498?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 16 12:15:09.391333 coreos-metadata[2036]: Dec 16 12:15:09.391 INFO Fetch successful Dec 16 12:15:09.391333 coreos-metadata[2036]: Dec 16 12:15:09.391 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 16 12:15:09.403196 coreos-metadata[2036]: Dec 16 12:15:09.403 INFO Fetch successful Dec 16 12:15:09.455961 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 12:15:09.462810 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:15:09.469207 sshd_keygen[2060]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:15:09.488917 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:15:09.495938 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:15:09.503423 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 16 12:15:09.527246 systemd[1]: issuegen.service: Deactivated successfully. 
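Note (illustrative annotation, not part of the captured log): the coreos-metadata entries above fetch instance data first from the Azure wireserver at 168.63.129.16 and then from the instance metadata endpoint at 169.254.169.254. A hedged Python sketch replaying the vmSize query exactly as logged; the 'Metadata: true' header is a general Azure IMDS requirement assumed here rather than something visible in the log, and the call only succeeds from inside an Azure VM:

import urllib.request

# URL copied verbatim from the coreos-metadata line above.
url = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")

# Header assumed per Azure IMDS documentation, not shown in the log.
req = urllib.request.Request(url, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=2) as resp:
    print(resp.read().decode())  # the VM size string, e.g. a Standard_* SKU name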
Dec 16 12:15:09.527475 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:15:09.541410 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:15:09.560052 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 16 12:15:09.582963 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:15:09.596770 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:15:09.607601 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 12:15:09.615734 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:15:09.619386 tar[2080]: linux-arm64/README.md Dec 16 12:15:09.626660 locksmithd[2160]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:15:09.636537 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:15:09.759287 containerd[2086]: time="2025-12-16T12:15:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:15:09.760404 containerd[2086]: time="2025-12-16T12:15:09.759774452Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 12:15:09.767842 containerd[2086]: time="2025-12-16T12:15:09.767813548Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.6µs" Dec 16 12:15:09.768146 containerd[2086]: time="2025-12-16T12:15:09.768125756Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:15:09.768233 containerd[2086]: time="2025-12-16T12:15:09.768220692Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:15:09.768276 containerd[2086]: time="2025-12-16T12:15:09.768266332Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:15:09.768562 containerd[2086]: time="2025-12-16T12:15:09.768542332Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:15:09.768621 containerd[2086]: time="2025-12-16T12:15:09.768611244Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:15:09.768719 containerd[2086]: time="2025-12-16T12:15:09.768705868Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:15:09.769073 containerd[2086]: time="2025-12-16T12:15:09.769054300Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:15:09.769367 containerd[2086]: time="2025-12-16T12:15:09.769348252Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:15:09.769421 containerd[2086]: time="2025-12-16T12:15:09.769411396Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:15:09.769469 containerd[2086]: time="2025-12-16T12:15:09.769459884Z" level=info msg="skip loading plugin" error="devmapper not configured: skip 
plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:15:09.769510 containerd[2086]: time="2025-12-16T12:15:09.769497908Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:15:09.769940 containerd[2086]: time="2025-12-16T12:15:09.769922780Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:15:09.769998 containerd[2086]: time="2025-12-16T12:15:09.769986716Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:15:09.770207 containerd[2086]: time="2025-12-16T12:15:09.770190788Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:15:09.770420 containerd[2086]: time="2025-12-16T12:15:09.770404716Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:15:09.771044 containerd[2086]: time="2025-12-16T12:15:09.771020244Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:15:09.771127 containerd[2086]: time="2025-12-16T12:15:09.771112260Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:15:09.771196 containerd[2086]: time="2025-12-16T12:15:09.771183852Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:15:09.771504 containerd[2086]: time="2025-12-16T12:15:09.771439892Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:15:09.771625 containerd[2086]: time="2025-12-16T12:15:09.771611164Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:15:09.782049 containerd[2086]: time="2025-12-16T12:15:09.781986132Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:15:09.782049 containerd[2086]: time="2025-12-16T12:15:09.782025268Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:15:09.883668 containerd[2086]: time="2025-12-16T12:15:09.883438244Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:15:09.883668 containerd[2086]: time="2025-12-16T12:15:09.883653204Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:15:09.883668 containerd[2086]: time="2025-12-16T12:15:09.883679548Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:15:09.883843 containerd[2086]: time="2025-12-16T12:15:09.883690196Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:15:09.883843 containerd[2086]: time="2025-12-16T12:15:09.883699580Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:15:09.883843 containerd[2086]: time="2025-12-16T12:15:09.883706028Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service 
type=io.containerd.service.v1 Dec 16 12:15:09.883843 containerd[2086]: time="2025-12-16T12:15:09.883714604Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:15:09.883843 containerd[2086]: time="2025-12-16T12:15:09.883725412Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:15:09.883843 containerd[2086]: time="2025-12-16T12:15:09.883732772Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:15:09.883843 containerd[2086]: time="2025-12-16T12:15:09.883739204Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:15:09.883843 containerd[2086]: time="2025-12-16T12:15:09.883745740Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:15:09.883843 containerd[2086]: time="2025-12-16T12:15:09.883776460Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:15:09.883948 containerd[2086]: time="2025-12-16T12:15:09.883929244Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:15:09.883948 containerd[2086]: time="2025-12-16T12:15:09.883945620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:15:09.883972 containerd[2086]: time="2025-12-16T12:15:09.883956476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:15:09.883972 containerd[2086]: time="2025-12-16T12:15:09.883963764Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:15:09.883972 containerd[2086]: time="2025-12-16T12:15:09.883970628Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:15:09.884010 containerd[2086]: time="2025-12-16T12:15:09.883976916Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:15:09.884010 containerd[2086]: time="2025-12-16T12:15:09.883990068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:15:09.884010 containerd[2086]: time="2025-12-16T12:15:09.883996900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:15:09.884010 containerd[2086]: time="2025-12-16T12:15:09.884004748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:15:09.884054 containerd[2086]: time="2025-12-16T12:15:09.884011444Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:15:09.884054 containerd[2086]: time="2025-12-16T12:15:09.884017436Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:15:09.884054 containerd[2086]: time="2025-12-16T12:15:09.884040124Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:15:09.884085 containerd[2086]: time="2025-12-16T12:15:09.884074452Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:15:09.884085 containerd[2086]: time="2025-12-16T12:15:09.884083956Z" level=info msg="Start snapshots syncer" Dec 16 
12:15:09.884110 containerd[2086]: time="2025-12-16T12:15:09.884097948Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:15:09.884701 containerd[2086]: time="2025-12-16T12:15:09.884292492Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:15:09.884701 containerd[2086]: time="2025-12-16T12:15:09.884337212Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:15:09.884787 containerd[2086]: time="2025-12-16T12:15:09.884368724Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:15:09.885629 containerd[2086]: time="2025-12-16T12:15:09.885603756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:15:09.885654 containerd[2086]: time="2025-12-16T12:15:09.885636156Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:15:09.885654 containerd[2086]: time="2025-12-16T12:15:09.885645068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:15:09.885654 containerd[2086]: time="2025-12-16T12:15:09.885651692Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:15:09.885709 containerd[2086]: time="2025-12-16T12:15:09.885661068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:15:09.885709 containerd[2086]: time="2025-12-16T12:15:09.885679756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:15:09.885709 containerd[2086]: time="2025-12-16T12:15:09.885687476Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:15:09.885709 containerd[2086]: time="2025-12-16T12:15:09.885694068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 12:15:09.885709 containerd[2086]: time="2025-12-16T12:15:09.885701804Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:15:09.885763 containerd[2086]: time="2025-12-16T12:15:09.885732308Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:15:09.885763 containerd[2086]: time="2025-12-16T12:15:09.885741900Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:15:09.885908 containerd[2086]: time="2025-12-16T12:15:09.885787676Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:15:09.885908 containerd[2086]: time="2025-12-16T12:15:09.885799932Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:15:09.885908 containerd[2086]: time="2025-12-16T12:15:09.885805036Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:15:09.885908 containerd[2086]: time="2025-12-16T12:15:09.885811220Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:15:09.885908 containerd[2086]: time="2025-12-16T12:15:09.885817948Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:15:09.885908 containerd[2086]: time="2025-12-16T12:15:09.885829460Z" level=info msg="runtime interface created" Dec 16 12:15:09.885908 containerd[2086]: time="2025-12-16T12:15:09.885832756Z" level=info msg="created NRI interface" Dec 16 12:15:09.885908 containerd[2086]: time="2025-12-16T12:15:09.885837548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:15:09.885908 containerd[2086]: time="2025-12-16T12:15:09.885846508Z" level=info msg="Connect containerd service" Dec 16 12:15:09.885908 containerd[2086]: time="2025-12-16T12:15:09.885870740Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:15:09.886818 containerd[2086]: time="2025-12-16T12:15:09.886656044Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:15:09.950748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:15:09.958936 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:15:10.256958 containerd[2086]: time="2025-12-16T12:15:10.256902676Z" level=info msg="Start subscribing containerd event" Dec 16 12:15:10.257112 containerd[2086]: time="2025-12-16T12:15:10.256980356Z" level=info msg="Start recovering state" Dec 16 12:15:10.257112 containerd[2086]: time="2025-12-16T12:15:10.257068348Z" level=info msg="Start event monitor" Dec 16 12:15:10.257112 containerd[2086]: time="2025-12-16T12:15:10.257080244Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:15:10.257112 containerd[2086]: time="2025-12-16T12:15:10.257085444Z" level=info msg="Start streaming server" Dec 16 12:15:10.257112 containerd[2086]: time="2025-12-16T12:15:10.257091220Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:15:10.257112 containerd[2086]: time="2025-12-16T12:15:10.257096260Z" level=info msg="runtime interface starting up..." Dec 16 12:15:10.257112 containerd[2086]: time="2025-12-16T12:15:10.257099612Z" level=info msg="starting plugins..." Dec 16 12:15:10.257112 containerd[2086]: time="2025-12-16T12:15:10.257109788Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:15:10.259369 containerd[2086]: time="2025-12-16T12:15:10.258315668Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:15:10.259369 containerd[2086]: time="2025-12-16T12:15:10.258359612Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 12:15:10.263974 containerd[2086]: time="2025-12-16T12:15:10.259572884Z" level=info msg="containerd successfully booted in 0.500582s" Dec 16 12:15:10.259621 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:15:10.267300 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:15:10.275509 systemd[1]: Startup finished in 2.981s (kernel) + 11.816s (initrd) + 14.133s (userspace) = 28.931s. Dec 16 12:15:10.310074 kubelet[2252]: E1216 12:15:10.310026 2252 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:15:10.312601 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:15:10.312696 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:15:10.313876 systemd[1]: kubelet.service: Consumed 541ms CPU time, 254.5M memory peak. Dec 16 12:15:10.707990 login[2224]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:15:10.707990 login[2223]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:15:10.716798 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:15:10.718223 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:15:10.720751 systemd-logind[2058]: New session 2 of user core. Dec 16 12:15:10.724463 systemd-logind[2058]: New session 1 of user core. Dec 16 12:15:10.748525 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:15:10.750135 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Dec 16 12:15:10.759208 (systemd)[2272]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:15:10.762634 systemd-logind[2058]: New session 3 of user core. Dec 16 12:15:10.896475 systemd[2272]: Queued start job for default target default.target. Dec 16 12:15:10.904295 systemd[2272]: Created slice app.slice - User Application Slice. Dec 16 12:15:10.904326 systemd[2272]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 12:15:10.904335 systemd[2272]: Reached target paths.target - Paths. Dec 16 12:15:10.904371 systemd[2272]: Reached target timers.target - Timers. Dec 16 12:15:10.905302 systemd[2272]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:15:10.905821 systemd[2272]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 12:15:10.913031 systemd[2272]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 12:15:10.917771 systemd[2272]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:15:10.917811 systemd[2272]: Reached target sockets.target - Sockets. Dec 16 12:15:10.917840 systemd[2272]: Reached target basic.target - Basic System. Dec 16 12:15:10.917862 systemd[2272]: Reached target default.target - Main User Target. Dec 16 12:15:10.917880 systemd[2272]: Startup finished in 151ms. Dec 16 12:15:10.918091 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:15:10.923555 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:15:10.924054 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 12:15:11.192109 waagent[2221]: 2025-12-16T12:15:11.192036Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 16 12:15:11.199751 waagent[2221]: 2025-12-16T12:15:11.196532Z INFO Daemon Daemon OS: flatcar 4547.0.0 Dec 16 12:15:11.199957 waagent[2221]: 2025-12-16T12:15:11.199914Z INFO Daemon Daemon Python: 3.11.13 Dec 16 12:15:11.203315 waagent[2221]: 2025-12-16T12:15:11.203279Z INFO Daemon Daemon Run daemon Dec 16 12:15:11.206174 waagent[2221]: 2025-12-16T12:15:11.206143Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4547.0.0' Dec 16 12:15:11.212836 waagent[2221]: 2025-12-16T12:15:11.212795Z INFO Daemon Daemon Using waagent for provisioning Dec 16 12:15:11.216519 waagent[2221]: 2025-12-16T12:15:11.216486Z INFO Daemon Daemon Activate resource disk Dec 16 12:15:11.219874 waagent[2221]: 2025-12-16T12:15:11.219845Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 16 12:15:11.227792 waagent[2221]: 2025-12-16T12:15:11.227758Z INFO Daemon Daemon Found device: None Dec 16 12:15:11.230792 waagent[2221]: 2025-12-16T12:15:11.230766Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 16 12:15:11.236652 waagent[2221]: 2025-12-16T12:15:11.236623Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 16 12:15:11.244503 waagent[2221]: 2025-12-16T12:15:11.244442Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 12:15:11.248399 waagent[2221]: 2025-12-16T12:15:11.248369Z INFO Daemon Daemon Running default provisioning handler Dec 16 12:15:11.257415 waagent[2221]: 2025-12-16T12:15:11.257361Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 
'cloud-init-local.service']' returned non-zero exit status 4. Dec 16 12:15:11.266784 waagent[2221]: 2025-12-16T12:15:11.266746Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 16 12:15:11.273200 waagent[2221]: 2025-12-16T12:15:11.273170Z INFO Daemon Daemon cloud-init is enabled: False Dec 16 12:15:11.276600 waagent[2221]: 2025-12-16T12:15:11.276577Z INFO Daemon Daemon Copying ovf-env.xml Dec 16 12:15:11.335967 waagent[2221]: 2025-12-16T12:15:11.335906Z INFO Daemon Daemon Successfully mounted dvd Dec 16 12:15:11.361788 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 16 12:15:11.363917 waagent[2221]: 2025-12-16T12:15:11.363877Z INFO Daemon Daemon Detect protocol endpoint Dec 16 12:15:11.367319 waagent[2221]: 2025-12-16T12:15:11.367288Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 12:15:11.371279 waagent[2221]: 2025-12-16T12:15:11.371252Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Dec 16 12:15:11.375937 waagent[2221]: 2025-12-16T12:15:11.375910Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 16 12:15:11.379822 waagent[2221]: 2025-12-16T12:15:11.379796Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 16 12:15:11.383353 waagent[2221]: 2025-12-16T12:15:11.383330Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 16 12:15:11.394848 waagent[2221]: 2025-12-16T12:15:11.394817Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 16 12:15:11.399516 waagent[2221]: 2025-12-16T12:15:11.399496Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 16 12:15:11.403325 waagent[2221]: 2025-12-16T12:15:11.403304Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 16 12:15:11.552341 waagent[2221]: 2025-12-16T12:15:11.552265Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 16 12:15:11.557110 waagent[2221]: 2025-12-16T12:15:11.557078Z INFO Daemon Daemon Forcing an update of the goal state. Dec 16 12:15:11.563868 waagent[2221]: 2025-12-16T12:15:11.563833Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 12:15:11.582583 waagent[2221]: 2025-12-16T12:15:11.582555Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Dec 16 12:15:11.586671 waagent[2221]: 2025-12-16T12:15:11.586638Z INFO Daemon Dec 16 12:15:11.588704 waagent[2221]: 2025-12-16T12:15:11.588678Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 6ba781d3-44d4-48ef-8b04-b0359df0d22e eTag: 11263005687868766596 source: Fabric] Dec 16 12:15:11.596737 waagent[2221]: 2025-12-16T12:15:11.596707Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 16 12:15:11.601643 waagent[2221]: 2025-12-16T12:15:11.601615Z INFO Daemon Dec 16 12:15:11.603650 waagent[2221]: 2025-12-16T12:15:11.603624Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 16 12:15:11.611383 waagent[2221]: 2025-12-16T12:15:11.611354Z INFO Daemon Daemon Downloading artifacts profile blob Dec 16 12:15:11.720078 waagent[2221]: 2025-12-16T12:15:11.720025Z INFO Daemon Downloaded certificate {'thumbprint': '03D660BAF3305530E51EE0F7F2D9F709188783C1', 'hasPrivateKey': True} Dec 16 12:15:11.727081 waagent[2221]: 2025-12-16T12:15:11.727047Z INFO Daemon Fetch goal state completed Dec 16 12:15:11.762961 waagent[2221]: 2025-12-16T12:15:11.762930Z INFO Daemon Daemon Starting provisioning Dec 16 12:15:11.766571 waagent[2221]: 2025-12-16T12:15:11.766540Z INFO Daemon Daemon Handle ovf-env.xml. 
Dec 16 12:15:11.769940 waagent[2221]: 2025-12-16T12:15:11.769917Z INFO Daemon Daemon Set hostname [ci-4547.0.0-a-8648328498] Dec 16 12:15:11.776210 waagent[2221]: 2025-12-16T12:15:11.776176Z INFO Daemon Daemon Publish hostname [ci-4547.0.0-a-8648328498] Dec 16 12:15:11.780674 waagent[2221]: 2025-12-16T12:15:11.780642Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 16 12:15:11.785045 waagent[2221]: 2025-12-16T12:15:11.785017Z INFO Daemon Daemon Primary interface is [eth0] Dec 16 12:15:11.794062 systemd-networkd[1678]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:15:11.794070 systemd-networkd[1678]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:15:11.794134 systemd-networkd[1678]: eth0: DHCP lease lost Dec 16 12:15:11.803637 waagent[2221]: 2025-12-16T12:15:11.803566Z INFO Daemon Daemon Create user account if not exists Dec 16 12:15:11.807553 waagent[2221]: 2025-12-16T12:15:11.807520Z INFO Daemon Daemon User core already exists, skip useradd Dec 16 12:15:11.811796 waagent[2221]: 2025-12-16T12:15:11.811757Z INFO Daemon Daemon Configure sudoer Dec 16 12:15:11.818431 waagent[2221]: 2025-12-16T12:15:11.818392Z INFO Daemon Daemon Configure sshd Dec 16 12:15:11.821519 systemd-networkd[1678]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:15:11.824576 waagent[2221]: 2025-12-16T12:15:11.824538Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 16 12:15:11.833840 waagent[2221]: 2025-12-16T12:15:11.833809Z INFO Daemon Daemon Deploy ssh public key. Dec 16 12:15:12.939812 waagent[2221]: 2025-12-16T12:15:12.939763Z INFO Daemon Daemon Provisioning complete Dec 16 12:15:12.954727 waagent[2221]: 2025-12-16T12:15:12.954691Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 16 12:15:12.959630 waagent[2221]: 2025-12-16T12:15:12.959599Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Dec 16 12:15:12.967201 waagent[2221]: 2025-12-16T12:15:12.967177Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 16 12:15:13.063142 waagent[2325]: 2025-12-16T12:15:13.063081Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 16 12:15:13.064469 waagent[2325]: 2025-12-16T12:15:13.063528Z INFO ExtHandler ExtHandler OS: flatcar 4547.0.0 Dec 16 12:15:13.064469 waagent[2325]: 2025-12-16T12:15:13.063581Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 16 12:15:13.064469 waagent[2325]: 2025-12-16T12:15:13.063617Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Dec 16 12:15:13.097590 waagent[2325]: 2025-12-16T12:15:13.097551Z INFO ExtHandler ExtHandler Distro: flatcar-4547.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 16 12:15:13.097798 waagent[2325]: 2025-12-16T12:15:13.097770Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:15:13.097908 waagent[2325]: 2025-12-16T12:15:13.097883Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:15:13.103193 waagent[2325]: 2025-12-16T12:15:13.103147Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 12:15:13.107782 waagent[2325]: 2025-12-16T12:15:13.107747Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Dec 16 12:15:13.108193 waagent[2325]: 2025-12-16T12:15:13.108160Z INFO ExtHandler Dec 16 12:15:13.108335 waagent[2325]: 2025-12-16T12:15:13.108310Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 7161d112-771b-4571-95d8-f358415bb9af eTag: 11263005687868766596 source: Fabric] Dec 16 12:15:13.108662 waagent[2325]: 2025-12-16T12:15:13.108630Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 16 12:15:13.109153 waagent[2325]: 2025-12-16T12:15:13.109120Z INFO ExtHandler Dec 16 12:15:13.109251 waagent[2325]: 2025-12-16T12:15:13.109232Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 16 12:15:13.112473 waagent[2325]: 2025-12-16T12:15:13.112424Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 12:15:13.163910 waagent[2325]: 2025-12-16T12:15:13.163867Z INFO ExtHandler Downloaded certificate {'thumbprint': '03D660BAF3305530E51EE0F7F2D9F709188783C1', 'hasPrivateKey': True} Dec 16 12:15:13.164391 waagent[2325]: 2025-12-16T12:15:13.164362Z INFO ExtHandler Fetch goal state completed Dec 16 12:15:13.176058 waagent[2325]: 2025-12-16T12:15:13.175683Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.4 30 Sep 2025 (Library: OpenSSL 3.5.4 30 Sep 2025) Dec 16 12:15:13.178858 waagent[2325]: 2025-12-16T12:15:13.178822Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2325 Dec 16 12:15:13.179028 waagent[2325]: 2025-12-16T12:15:13.179003Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 16 12:15:13.179332 waagent[2325]: 2025-12-16T12:15:13.179305Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 16 12:15:13.180478 waagent[2325]: 2025-12-16T12:15:13.180431Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4547.0.0', '', 'Flatcar Container Linux by Kinvolk'] Dec 16 12:15:13.180855 waagent[2325]: 2025-12-16T12:15:13.180826Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4547.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 16 12:15:13.181051 waagent[2325]: 2025-12-16T12:15:13.181025Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 16 12:15:13.181558 waagent[2325]: 2025-12-16T12:15:13.181530Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 16 12:15:13.278729 waagent[2325]: 2025-12-16T12:15:13.278340Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 16 12:15:13.278729 waagent[2325]: 2025-12-16T12:15:13.278541Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 16 12:15:13.282983 waagent[2325]: 2025-12-16T12:15:13.282963Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 16 12:15:13.287662 systemd[1]: Reload requested from client PID 2340 ('systemctl') (unit waagent.service)... Dec 16 12:15:13.287676 systemd[1]: Reloading... Dec 16 12:15:13.352474 zram_generator::config[2378]: No configuration found. Dec 16 12:15:13.512734 systemd[1]: Reloading finished in 224 ms. Dec 16 12:15:13.537264 waagent[2325]: 2025-12-16T12:15:13.536608Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 16 12:15:13.537264 waagent[2325]: 2025-12-16T12:15:13.536741Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 16 12:15:13.832694 waagent[2325]: 2025-12-16T12:15:13.832586Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 16 12:15:13.833063 waagent[2325]: 2025-12-16T12:15:13.833031Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 16 12:15:13.833784 waagent[2325]: 2025-12-16T12:15:13.833746Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 16 12:15:13.833889 waagent[2325]: 2025-12-16T12:15:13.833855Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:15:13.834063 waagent[2325]: 2025-12-16T12:15:13.834039Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:15:13.834231 waagent[2325]: 2025-12-16T12:15:13.834203Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 16 12:15:13.834532 waagent[2325]: 2025-12-16T12:15:13.834496Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 16 12:15:13.834647 waagent[2325]: 2025-12-16T12:15:13.834617Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 16 12:15:13.834647 waagent[2325]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 16 12:15:13.834647 waagent[2325]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 16 12:15:13.834647 waagent[2325]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 16 12:15:13.834647 waagent[2325]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:15:13.834647 waagent[2325]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:15:13.834647 waagent[2325]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:15:13.835221 waagent[2325]: 2025-12-16T12:15:13.835188Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 16 12:15:13.835301 waagent[2325]: 2025-12-16T12:15:13.835266Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:15:13.835351 waagent[2325]: 2025-12-16T12:15:13.835329Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:15:13.835483 waagent[2325]: 2025-12-16T12:15:13.835425Z INFO EnvHandler ExtHandler Configure routes Dec 16 12:15:13.835626 waagent[2325]: 2025-12-16T12:15:13.835588Z INFO EnvHandler ExtHandler Gateway:None Dec 16 12:15:13.835655 waagent[2325]: 2025-12-16T12:15:13.835629Z INFO EnvHandler ExtHandler Routes:None Dec 16 12:15:13.835827 waagent[2325]: 2025-12-16T12:15:13.835738Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 16 12:15:13.836125 waagent[2325]: 2025-12-16T12:15:13.836095Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 16 12:15:13.836215 waagent[2325]: 2025-12-16T12:15:13.836181Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 16 12:15:13.836395 waagent[2325]: 2025-12-16T12:15:13.836372Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 16 12:15:13.843475 waagent[2325]: 2025-12-16T12:15:13.842319Z INFO ExtHandler ExtHandler Dec 16 12:15:13.843475 waagent[2325]: 2025-12-16T12:15:13.842374Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 8354e896-940e-4635-a6bd-ec8138afb6a6 correlation 06939576-9f35-452f-9ef9-c52b244441a3 created: 2025-12-16T12:14:20.666405Z] Dec 16 12:15:13.843475 waagent[2325]: 2025-12-16T12:15:13.842640Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Dec 16 12:15:13.843475 waagent[2325]: 2025-12-16T12:15:13.843013Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Dec 16 12:15:13.862272 waagent[2325]: 2025-12-16T12:15:13.862242Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 16 12:15:13.862272 waagent[2325]: Try `iptables -h' or 'iptables --help' for more information.) Dec 16 12:15:13.862696 waagent[2325]: 2025-12-16T12:15:13.862669Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 935BEB25-F169-469E-BD57-17FC5CF77EE5;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 16 12:15:13.882611 waagent[2325]: 2025-12-16T12:15:13.882567Z INFO MonitorHandler ExtHandler Network interfaces: Dec 16 12:15:13.882611 waagent[2325]: Executing ['ip', '-a', '-o', 'link']: Dec 16 12:15:13.882611 waagent[2325]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 16 12:15:13.882611 waagent[2325]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:be:ab:79 brd ff:ff:ff:ff:ff:ff\ altname enx002248beab79 Dec 16 12:15:13.882611 waagent[2325]: 3: enP22337s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:be:ab:79 brd ff:ff:ff:ff:ff:ff\ altname enP22337p0s2 Dec 16 12:15:13.882611 waagent[2325]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 16 12:15:13.882611 waagent[2325]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 16 12:15:13.882611 waagent[2325]: 2: eth0 inet 10.200.20.11/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 16 12:15:13.882611 waagent[2325]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 16 12:15:13.882611 waagent[2325]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 16 12:15:13.882611 waagent[2325]: 2: eth0 inet6 fe80::222:48ff:febe:ab79/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 16 12:15:13.934391 waagent[2325]: 2025-12-16T12:15:13.933772Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 16 12:15:13.934391 waagent[2325]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:15:13.934391 waagent[2325]: pkts bytes target prot opt in out source destination Dec 16 12:15:13.934391 waagent[2325]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:15:13.934391 waagent[2325]: pkts bytes target prot opt in out source destination Dec 16 12:15:13.934391 waagent[2325]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:15:13.934391 waagent[2325]: pkts bytes target prot opt in out source destination Dec 16 12:15:13.934391 waagent[2325]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 12:15:13.934391 waagent[2325]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 12:15:13.934391 waagent[2325]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 12:15:13.935968 waagent[2325]: 2025-12-16T12:15:13.935938Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 16 12:15:13.935968 waagent[2325]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:15:13.935968 waagent[2325]: pkts bytes target prot opt in 
out source destination Dec 16 12:15:13.935968 waagent[2325]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:15:13.935968 waagent[2325]: pkts bytes target prot opt in out source destination Dec 16 12:15:13.935968 waagent[2325]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:15:13.935968 waagent[2325]: pkts bytes target prot opt in out source destination Dec 16 12:15:13.935968 waagent[2325]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 12:15:13.935968 waagent[2325]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 12:15:13.935968 waagent[2325]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 12:15:13.936337 waagent[2325]: 2025-12-16T12:15:13.936315Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 16 12:15:20.494636 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:15:20.495819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:15:20.590026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:15:20.596667 (kubelet)[2477]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:15:20.730683 kubelet[2477]: E1216 12:15:20.730618 2477 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:15:20.733324 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:15:20.733547 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:15:20.734146 systemd[1]: kubelet.service: Consumed 108ms CPU time, 106.1M memory peak. Dec 16 12:15:30.743851 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 12:15:30.745570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:15:30.841474 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:15:30.850618 (kubelet)[2491]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:15:30.980297 kubelet[2491]: E1216 12:15:30.980235 2491 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:15:30.982340 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:15:30.982550 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:15:30.983110 systemd[1]: kubelet.service: Consumed 104ms CPU time, 104.7M memory peak. Dec 16 12:15:32.735857 chronyd[2034]: Selected source PHC0 Dec 16 12:15:36.799520 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:15:36.800991 systemd[1]: Started sshd@0-10.200.20.11:22-10.200.16.10:41324.service - OpenSSH per-connection server daemon (10.200.16.10:41324). 
Dec 16 12:15:37.366602 sshd[2499]: Accepted publickey for core from 10.200.16.10 port 41324 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:15:37.367708 sshd-session[2499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:15:37.371280 systemd-logind[2058]: New session 4 of user core. Dec 16 12:15:37.382583 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:15:37.668263 systemd[1]: Started sshd@1-10.200.20.11:22-10.200.16.10:41336.service - OpenSSH per-connection server daemon (10.200.16.10:41336). Dec 16 12:15:38.085470 sshd[2506]: Accepted publickey for core from 10.200.16.10 port 41336 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:15:38.086514 sshd-session[2506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:15:38.090291 systemd-logind[2058]: New session 5 of user core. Dec 16 12:15:38.098572 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 12:15:38.318516 sshd[2510]: Connection closed by 10.200.16.10 port 41336 Dec 16 12:15:38.318130 sshd-session[2506]: pam_unix(sshd:session): session closed for user core Dec 16 12:15:38.320932 systemd-logind[2058]: Session 5 logged out. Waiting for processes to exit. Dec 16 12:15:38.321049 systemd[1]: sshd@1-10.200.20.11:22-10.200.16.10:41336.service: Deactivated successfully. Dec 16 12:15:38.322282 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:15:38.324030 systemd-logind[2058]: Removed session 5. Dec 16 12:15:38.409892 systemd[1]: Started sshd@2-10.200.20.11:22-10.200.16.10:41348.service - OpenSSH per-connection server daemon (10.200.16.10:41348). Dec 16 12:15:38.839213 sshd[2516]: Accepted publickey for core from 10.200.16.10 port 41348 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:15:38.840219 sshd-session[2516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:15:38.843815 systemd-logind[2058]: New session 6 of user core. Dec 16 12:15:38.849558 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 12:15:39.071307 sshd[2520]: Connection closed by 10.200.16.10 port 41348 Dec 16 12:15:39.071733 sshd-session[2516]: pam_unix(sshd:session): session closed for user core Dec 16 12:15:39.075231 systemd[1]: sshd@2-10.200.20.11:22-10.200.16.10:41348.service: Deactivated successfully. Dec 16 12:15:39.076544 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 12:15:39.077163 systemd-logind[2058]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:15:39.079169 systemd-logind[2058]: Removed session 6. Dec 16 12:15:39.158245 systemd[1]: Started sshd@3-10.200.20.11:22-10.200.16.10:41364.service - OpenSSH per-connection server daemon (10.200.16.10:41364). Dec 16 12:15:39.577747 sshd[2526]: Accepted publickey for core from 10.200.16.10 port 41364 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:15:39.578793 sshd-session[2526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:15:39.582646 systemd-logind[2058]: New session 7 of user core. Dec 16 12:15:39.592587 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 12:15:39.810860 sshd[2530]: Connection closed by 10.200.16.10 port 41364 Dec 16 12:15:39.811319 sshd-session[2526]: pam_unix(sshd:session): session closed for user core Dec 16 12:15:39.814084 systemd-logind[2058]: Session 7 logged out. Waiting for processes to exit. 
Dec 16 12:15:39.814212 systemd[1]: sshd@3-10.200.20.11:22-10.200.16.10:41364.service: Deactivated successfully. Dec 16 12:15:39.815772 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:15:39.817368 systemd-logind[2058]: Removed session 7. Dec 16 12:15:39.897591 systemd[1]: Started sshd@4-10.200.20.11:22-10.200.16.10:41376.service - OpenSSH per-connection server daemon (10.200.16.10:41376). Dec 16 12:15:40.317306 sshd[2536]: Accepted publickey for core from 10.200.16.10 port 41376 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:15:40.318306 sshd-session[2536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:15:40.322307 systemd-logind[2058]: New session 8 of user core. Dec 16 12:15:40.329579 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 12:15:40.619274 sudo[2541]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:15:40.619566 sudo[2541]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:15:40.647689 sudo[2541]: pam_unix(sudo:session): session closed for user root Dec 16 12:15:40.724400 sshd[2540]: Connection closed by 10.200.16.10 port 41376 Dec 16 12:15:40.724904 sshd-session[2536]: pam_unix(sshd:session): session closed for user core Dec 16 12:15:40.728181 systemd[1]: sshd@4-10.200.20.11:22-10.200.16.10:41376.service: Deactivated successfully. Dec 16 12:15:40.729830 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:15:40.730992 systemd-logind[2058]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:15:40.731938 systemd-logind[2058]: Removed session 8. Dec 16 12:15:40.812662 systemd[1]: Started sshd@5-10.200.20.11:22-10.200.16.10:40824.service - OpenSSH per-connection server daemon (10.200.16.10:40824). Dec 16 12:15:40.993363 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 12:15:40.994841 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:15:41.087829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:15:41.096610 (kubelet)[2559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:15:41.236476 kubelet[2559]: E1216 12:15:41.235987 2559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:15:41.238035 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:15:41.573203 sshd[2548]: Accepted publickey for core from 10.200.16.10 port 40824 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:15:41.240691 sshd-session[2548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:15:41.238137 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:15:41.398245 sudo[2569]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:15:41.238725 systemd[1]: kubelet.service: Consumed 213ms CPU time, 107.3M memory peak. 
Dec 16 12:15:41.398436 sudo[2569]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:15:41.244386 systemd-logind[2058]: New session 9 of user core. Dec 16 12:15:41.262564 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:15:41.575367 sudo[2569]: pam_unix(sudo:session): session closed for user root Dec 16 12:15:41.581005 sudo[2568]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:15:41.581195 sudo[2568]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:15:41.586528 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:15:41.611000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:15:41.613575 augenrules[2593]: No rules Dec 16 12:15:41.615722 kernel: kauditd_printk_skb: 159 callbacks suppressed Dec 16 12:15:41.615762 kernel: audit: type=1305 audit(1765887341.611:260): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:15:41.623849 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:15:41.624028 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:15:41.611000 audit[2593]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe84065e0 a2=420 a3=0 items=0 ppid=2574 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:41.640380 kernel: audit: type=1300 audit(1765887341.611:260): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe84065e0 a2=420 a3=0 items=0 ppid=2574 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:41.640630 sudo[2568]: pam_unix(sudo:session): session closed for user root Dec 16 12:15:41.611000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:15:41.648395 kernel: audit: type=1327 audit(1765887341.611:260): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:15:41.648458 kernel: audit: type=1130 audit(1765887341.624:261): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:41.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:41.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:41.671110 kernel: audit: type=1131 audit(1765887341.624:262): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:15:41.639000 audit[2568]: USER_END pid=2568 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:15:41.683867 kernel: audit: type=1106 audit(1765887341.639:263): pid=2568 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:15:41.639000 audit[2568]: CRED_DISP pid=2568 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:15:41.695269 kernel: audit: type=1104 audit(1765887341.639:264): pid=2568 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:15:41.718236 sshd[2567]: Connection closed by 10.200.16.10 port 40824 Dec 16 12:15:41.718155 sshd-session[2548]: pam_unix(sshd:session): session closed for user core Dec 16 12:15:41.717000 audit[2548]: USER_END pid=2548 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:15:41.717000 audit[2548]: CRED_DISP pid=2548 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:15:41.739774 systemd[1]: sshd@5-10.200.20.11:22-10.200.16.10:40824.service: Deactivated successfully. Dec 16 12:15:41.741385 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:15:41.751342 kernel: audit: type=1106 audit(1765887341.717:265): pid=2548 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:15:41.751394 kernel: audit: type=1104 audit(1765887341.717:266): pid=2548 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:15:41.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.11:22-10.200.16.10:40824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:41.752240 systemd-logind[2058]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:15:41.753166 systemd-logind[2058]: Removed session 9. Dec 16 12:15:41.764238 kernel: audit: type=1131 audit(1765887341.737:267): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.11:22-10.200.16.10:40824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:15:41.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.11:22-10.200.16.10:40826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:41.798078 systemd[1]: Started sshd@6-10.200.20.11:22-10.200.16.10:40826.service - OpenSSH per-connection server daemon (10.200.16.10:40826). Dec 16 12:15:42.185000 audit[2602]: USER_ACCT pid=2602 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:15:42.187284 sshd[2602]: Accepted publickey for core from 10.200.16.10 port 40826 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:15:42.186000 audit[2602]: CRED_ACQ pid=2602 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:15:42.186000 audit[2602]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd0b7a6e0 a2=3 a3=0 items=0 ppid=1 pid=2602 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:42.186000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:15:42.188503 sshd-session[2602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:15:42.192089 systemd-logind[2058]: New session 10 of user core. Dec 16 12:15:42.202743 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:15:42.203000 audit[2602]: USER_START pid=2602 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:15:42.204000 audit[2606]: CRED_ACQ pid=2606 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:15:42.330000 audit[2607]: USER_ACCT pid=2607 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:15:42.330000 audit[2607]: CRED_REFR pid=2607 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:15:42.330000 audit[2607]: USER_START pid=2607 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 12:15:42.332167 sudo[2607]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:15:42.332364 sudo[2607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:15:43.452648 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 12:15:43.462780 (dockerd)[2627]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:15:44.404303 dockerd[2627]: time="2025-12-16T12:15:44.404247310Z" level=info msg="Starting up" Dec 16 12:15:44.405181 dockerd[2627]: time="2025-12-16T12:15:44.405151699Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:15:44.412984 dockerd[2627]: time="2025-12-16T12:15:44.412955863Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:15:44.477750 systemd[1]: var-lib-docker-metacopy\x2dcheck923030955-merged.mount: Deactivated successfully. Dec 16 12:15:44.488942 dockerd[2627]: time="2025-12-16T12:15:44.488908198Z" level=info msg="Loading containers: start." Dec 16 12:15:44.515517 kernel: Initializing XFRM netlink socket Dec 16 12:15:44.574000 audit[2674]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2674 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.574000 audit[2674]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc8e557d0 a2=0 a3=0 items=0 ppid=2627 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.574000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:15:44.576000 audit[2676]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2676 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.576000 audit[2676]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe8b33010 a2=0 a3=0 items=0 ppid=2627 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.576000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:15:44.578000 audit[2678]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2678 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.578000 audit[2678]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd0652e00 a2=0 a3=0 items=0 ppid=2627 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.578000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:15:44.579000 audit[2680]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2680 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.579000 audit[2680]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe102b6a0 a2=0 a3=0 items=0 ppid=2627 pid=2680 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.579000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:15:44.581000 audit[2682]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=2682 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.581000 audit[2682]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe1b01c20 a2=0 a3=0 items=0 ppid=2627 pid=2682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.581000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:15:44.582000 audit[2684]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=2684 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.582000 audit[2684]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc282b470 a2=0 a3=0 items=0 ppid=2627 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.582000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:15:44.584000 audit[2686]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2686 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.584000 audit[2686]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc74b70c0 a2=0 a3=0 items=0 ppid=2627 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.584000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:15:44.586000 audit[2688]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=2688 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.586000 audit[2688]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffeb530ca0 a2=0 a3=0 items=0 ppid=2627 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.586000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:15:44.628000 audit[2691]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2691 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.628000 audit[2691]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=fffff28e30f0 a2=0 a3=0 items=0 ppid=2627 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.628000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 12:15:44.630000 audit[2693]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=2693 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.630000 audit[2693]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffccb45a40 a2=0 a3=0 items=0 ppid=2627 pid=2693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.630000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:15:44.631000 audit[2695]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2695 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.631000 audit[2695]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=fffffa70ba90 a2=0 a3=0 items=0 ppid=2627 pid=2695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.631000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:15:44.633000 audit[2697]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2697 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.633000 audit[2697]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=fffff9f990b0 a2=0 a3=0 items=0 ppid=2627 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.633000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:15:44.634000 audit[2699]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=2699 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.634000 audit[2699]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffcb096240 a2=0 a3=0 items=0 ppid=2627 pid=2699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.634000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:15:44.693000 audit[2729]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=2729 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:44.693000 audit[2729]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe6fb4ce0 a2=0 a3=0 items=0 ppid=2627 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.693000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:15:44.694000 audit[2731]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=2731 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:44.694000 audit[2731]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd01ff9b0 a2=0 a3=0 items=0 ppid=2627 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.694000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:15:44.696000 audit[2733]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2733 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:44.696000 audit[2733]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd22ef5e0 a2=0 a3=0 items=0 ppid=2627 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.696000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:15:44.697000 audit[2735]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2735 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:44.697000 audit[2735]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd98e2180 a2=0 a3=0 items=0 ppid=2627 pid=2735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.697000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:15:44.698000 audit[2737]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=2737 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:44.698000 audit[2737]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff2e4b310 a2=0 a3=0 items=0 ppid=2627 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.698000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:15:44.700000 audit[2739]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=2739 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:44.700000 audit[2739]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc7fcf310 a2=0 a3=0 items=0 ppid=2627 pid=2739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.700000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:15:44.701000 audit[2741]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain 
pid=2741 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:44.701000 audit[2741]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff85f89c0 a2=0 a3=0 items=0 ppid=2627 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.701000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:15:44.703000 audit[2743]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=2743 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:44.703000 audit[2743]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffffe8e5460 a2=0 a3=0 items=0 ppid=2627 pid=2743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.703000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:15:44.704000 audit[2745]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=2745 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:44.704000 audit[2745]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffce22b7c0 a2=0 a3=0 items=0 ppid=2627 pid=2745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.704000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 12:15:44.706000 audit[2747]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=2747 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:44.706000 audit[2747]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd9a4c870 a2=0 a3=0 items=0 ppid=2627 pid=2747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.706000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:15:44.707000 audit[2749]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=2749 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:44.707000 audit[2749]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffdda2b2e0 a2=0 a3=0 items=0 ppid=2627 pid=2749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.707000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:15:44.709000 audit[2751]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=2751 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:44.709000 audit[2751]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffdd6e8df0 a2=0 a3=0 items=0 ppid=2627 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.709000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:15:44.710000 audit[2753]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=2753 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:44.710000 audit[2753]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffec3f8250 a2=0 a3=0 items=0 ppid=2627 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.710000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:15:44.714000 audit[2758]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=2758 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.714000 audit[2758]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe9f3c440 a2=0 a3=0 items=0 ppid=2627 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.714000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:15:44.715000 audit[2760]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2760 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.715000 audit[2760]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd779b140 a2=0 a3=0 items=0 ppid=2627 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.715000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:15:44.717000 audit[2762]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2762 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.717000 audit[2762]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffdfb90530 a2=0 a3=0 items=0 ppid=2627 pid=2762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.717000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:15:44.718000 audit[2764]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2764 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:44.718000 audit[2764]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe380c580 a2=0 a3=0 items=0 ppid=2627 pid=2764 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.718000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:15:44.720000 audit[2766]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=2766 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:44.720000 audit[2766]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff7e76910 a2=0 a3=0 items=0 ppid=2627 pid=2766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.720000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:15:44.721000 audit[2768]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=2768 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:44.721000 audit[2768]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff4b0c160 a2=0 a3=0 items=0 ppid=2627 pid=2768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.721000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:15:44.786000 audit[2773]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=2773 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.786000 audit[2773]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=fffffa3a5750 a2=0 a3=0 items=0 ppid=2627 pid=2773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.786000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 12:15:44.787000 audit[2775]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=2775 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.787000 audit[2775]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffe30b27a0 a2=0 a3=0 items=0 ppid=2627 pid=2775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.787000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 12:15:44.793000 audit[2783]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2783 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.793000 audit[2783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffda0bf9e0 a2=0 a3=0 items=0 ppid=2627 pid=2783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.793000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 12:15:44.797000 audit[2788]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2788 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.797000 audit[2788]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffffeaf8570 a2=0 a3=0 items=0 ppid=2627 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.797000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 12:15:44.798000 audit[2790]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2790 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.798000 audit[2790]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffdc1584d0 a2=0 a3=0 items=0 ppid=2627 pid=2790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.798000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 12:15:44.800000 audit[2792]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2792 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.800000 audit[2792]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffde35c070 a2=0 a3=0 items=0 ppid=2627 pid=2792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.800000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 12:15:44.801000 audit[2794]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=2794 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.801000 audit[2794]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffe4ad72c0 a2=0 a3=0 items=0 ppid=2627 pid=2794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.801000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:15:44.803000 audit[2796]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=2796 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:44.803000 audit[2796]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffeb7dfcb0 a2=0 a3=0 items=0 ppid=2627 pid=2796 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:44.803000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 12:15:44.805567 systemd-networkd[1678]: docker0: Link UP Dec 16 12:15:44.816626 dockerd[2627]: time="2025-12-16T12:15:44.816591058Z" level=info msg="Loading containers: done." Dec 16 12:15:44.874620 dockerd[2627]: time="2025-12-16T12:15:44.874581821Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:15:44.874731 dockerd[2627]: time="2025-12-16T12:15:44.874646647Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:15:44.874749 dockerd[2627]: time="2025-12-16T12:15:44.874732713Z" level=info msg="Initializing buildkit" Dec 16 12:15:44.912041 dockerd[2627]: time="2025-12-16T12:15:44.911978130Z" level=info msg="Completed buildkit initialization" Dec 16 12:15:44.917111 dockerd[2627]: time="2025-12-16T12:15:44.917082506Z" level=info msg="Daemon has completed initialization" Dec 16 12:15:44.917501 dockerd[2627]: time="2025-12-16T12:15:44.917165204Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:15:44.917732 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 12:15:44.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:45.616898 containerd[2086]: time="2025-12-16T12:15:45.616860903Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 12:15:46.406531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount509170206.mount: Deactivated successfully. 
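Note: the NETFILTER_CFG/SYSCALL audit records above carry the invoking command line in the PROCTITLE field as hex-encoded argv with NUL separators between arguments. A minimal decoding sketch in Python (the example string is copied from the first iptables record above and decodes to "/usr/bin/iptables --wait -t nat -N DOCKER"):

    def decode_proctitle(hex_str: str) -> str:
        # auditd logs the process's argv hex-encoded, with NUL bytes between arguments.
        raw = bytes.fromhex(hex_str)
        return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

    print(decode_proctitle(
        "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
    ))  # -> /usr/bin/iptables --wait -t nat -N DOCKER

Decoded this way, the records above show dockerd creating its usual chain set (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2, DOCKER-USER) in both the filter and nat tables, for IPv4 (family=2) and IPv6 (family=10), while loading containers.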
Dec 16 12:15:47.313432 containerd[2086]: time="2025-12-16T12:15:47.313313831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:47.315504 containerd[2086]: time="2025-12-16T12:15:47.315463827Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=25746322" Dec 16 12:15:47.322036 containerd[2086]: time="2025-12-16T12:15:47.321989672Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:47.326333 containerd[2086]: time="2025-12-16T12:15:47.326293519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:47.327133 containerd[2086]: time="2025-12-16T12:15:47.326849104Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 1.709954904s" Dec 16 12:15:47.327133 containerd[2086]: time="2025-12-16T12:15:47.326880633Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Dec 16 12:15:47.327525 containerd[2086]: time="2025-12-16T12:15:47.327494316Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 16 12:15:48.666303 containerd[2086]: time="2025-12-16T12:15:48.666249390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:48.668168 containerd[2086]: time="2025-12-16T12:15:48.668013725Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22610801" Dec 16 12:15:48.670284 containerd[2086]: time="2025-12-16T12:15:48.670259124Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:48.674025 containerd[2086]: time="2025-12-16T12:15:48.673995657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:48.674964 containerd[2086]: time="2025-12-16T12:15:48.674940974Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.347269852s" Dec 16 12:15:48.674999 containerd[2086]: time="2025-12-16T12:15:48.674970183Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Dec 16 
12:15:48.675334 containerd[2086]: time="2025-12-16T12:15:48.675311354Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 16 12:15:49.524382 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 16 12:15:49.761730 containerd[2086]: time="2025-12-16T12:15:49.761680221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:49.763806 containerd[2086]: time="2025-12-16T12:15:49.763641391Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17610300" Dec 16 12:15:49.765938 containerd[2086]: time="2025-12-16T12:15:49.765916295Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:49.770579 containerd[2086]: time="2025-12-16T12:15:49.770554529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:49.771919 containerd[2086]: time="2025-12-16T12:15:49.771610886Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.096274667s" Dec 16 12:15:49.772292 containerd[2086]: time="2025-12-16T12:15:49.771992278Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Dec 16 12:15:49.772761 containerd[2086]: time="2025-12-16T12:15:49.772735741Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 16 12:15:50.735282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1337288050.mount: Deactivated successfully. Dec 16 12:15:51.243515 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 12:15:51.244760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:15:51.333755 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:15:51.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:51.337350 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 16 12:15:51.337407 kernel: audit: type=1130 audit(1765887351.333:318): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:15:51.350115 (kubelet)[2917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:15:51.373402 kubelet[2917]: E1216 12:15:51.373296 2917 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:15:51.375214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:15:51.375407 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:15:51.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:15:51.375916 systemd[1]: kubelet.service: Consumed 101ms CPU time, 106.9M memory peak. Dec 16 12:15:51.387484 kernel: audit: type=1131 audit(1765887351.375:319): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:15:51.917078 containerd[2086]: time="2025-12-16T12:15:51.917026444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:51.919260 containerd[2086]: time="2025-12-16T12:15:51.919128196Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27558078" Dec 16 12:15:51.921254 containerd[2086]: time="2025-12-16T12:15:51.921231220Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:51.924240 containerd[2086]: time="2025-12-16T12:15:51.924204649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:51.924891 containerd[2086]: time="2025-12-16T12:15:51.924440075Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 2.15167502s" Dec 16 12:15:51.924891 containerd[2086]: time="2025-12-16T12:15:51.924477420Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Dec 16 12:15:51.925226 containerd[2086]: time="2025-12-16T12:15:51.925195522Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 12:15:52.570150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1960434016.mount: Deactivated successfully. 
Dec 16 12:15:53.219270 containerd[2086]: time="2025-12-16T12:15:53.219220288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:53.222320 containerd[2086]: time="2025-12-16T12:15:53.222277208Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16590442" Dec 16 12:15:53.224202 containerd[2086]: time="2025-12-16T12:15:53.224164440Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:53.227471 containerd[2086]: time="2025-12-16T12:15:53.227424840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:53.228164 containerd[2086]: time="2025-12-16T12:15:53.228024842Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.302676552s" Dec 16 12:15:53.228164 containerd[2086]: time="2025-12-16T12:15:53.228051683Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Dec 16 12:15:53.228584 containerd[2086]: time="2025-12-16T12:15:53.228559768Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 12:15:53.794465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1149311939.mount: Deactivated successfully. 
Dec 16 12:15:53.810074 containerd[2086]: time="2025-12-16T12:15:53.809642617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:15:53.813001 containerd[2086]: time="2025-12-16T12:15:53.812964652Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:15:53.815484 containerd[2086]: time="2025-12-16T12:15:53.815462589Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:15:53.818944 containerd[2086]: time="2025-12-16T12:15:53.818920174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:15:53.819374 containerd[2086]: time="2025-12-16T12:15:53.819277525Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 590.69314ms" Dec 16 12:15:53.819802 containerd[2086]: time="2025-12-16T12:15:53.819784242Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 16 12:15:53.820270 containerd[2086]: time="2025-12-16T12:15:53.820234821Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 12:15:54.354364 update_engine[2061]: I20251216 12:15:54.354308 2061 update_attempter.cc:509] Updating boot flags... Dec 16 12:15:54.541302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount673403345.mount: Deactivated successfully. 
Dec 16 12:15:56.792177 containerd[2086]: time="2025-12-16T12:15:56.791507841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:56.793596 containerd[2086]: time="2025-12-16T12:15:56.793560179Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=66060366" Dec 16 12:15:56.795961 containerd[2086]: time="2025-12-16T12:15:56.795941472Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:56.799047 containerd[2086]: time="2025-12-16T12:15:56.799024823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:15:56.799625 containerd[2086]: time="2025-12-16T12:15:56.799601924Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.979264242s" Dec 16 12:15:56.799706 containerd[2086]: time="2025-12-16T12:15:56.799693927Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Dec 16 12:15:58.765319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:15:58.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:58.765437 systemd[1]: kubelet.service: Consumed 101ms CPU time, 106.9M memory peak. Dec 16 12:15:58.769639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:15:58.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:58.788677 kernel: audit: type=1130 audit(1765887358.764:320): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:58.788722 kernel: audit: type=1131 audit(1765887358.764:321): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:58.805021 systemd[1]: Reload requested from client PID 3176 ('systemctl') (unit session-10.scope)... Dec 16 12:15:58.805030 systemd[1]: Reloading... Dec 16 12:15:58.903548 zram_generator::config[3225]: No configuration found. Dec 16 12:15:59.054876 systemd[1]: Reloading finished in 249 ms. 
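Note: the systemd reload recorded above swaps out the manager's pinned BPF programs; the audit stream that follows pairs LOADs of new prog-ids with UNLOADs of the ones being retired (twenty of each in this reload). A minimal sketch for tallying those records, assuming the journal text has been saved to a file (the path "boot.log" is hypothetical):

    import re
    from collections import Counter

    # Matches audit records of the form: audit: BPF prog-id=87 op=LOAD
    BPF_RE = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

    def tally_bpf_events(path: str) -> Counter:
        counts: Counter = Counter()
        with open(path) as fh:
            for line in fh:
                for _prog_id, op in BPF_RE.findall(line):
                    counts[op] += 1
        return counts

    print(tally_bpf_events("boot.log"))  # prints something like Counter({'LOAD': 20, 'UNLOAD': 20})

Equal LOAD/UNLOAD counts over a reload window are consistent with systemd re-creating each of its BPF programs and releasing the old ones, rather than leaking them.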
Dec 16 12:15:59.082000 audit: BPF prog-id=87 op=LOAD Dec 16 12:15:59.086000 audit: BPF prog-id=70 op=UNLOAD Dec 16 12:15:59.091971 kernel: audit: type=1334 audit(1765887359.082:322): prog-id=87 op=LOAD Dec 16 12:15:59.092005 kernel: audit: type=1334 audit(1765887359.086:323): prog-id=70 op=UNLOAD Dec 16 12:15:59.086000 audit: BPF prog-id=88 op=LOAD Dec 16 12:15:59.095932 kernel: audit: type=1334 audit(1765887359.086:324): prog-id=88 op=LOAD Dec 16 12:15:59.086000 audit: BPF prog-id=89 op=LOAD Dec 16 12:15:59.099771 kernel: audit: type=1334 audit(1765887359.086:325): prog-id=89 op=LOAD Dec 16 12:15:59.086000 audit: BPF prog-id=71 op=UNLOAD Dec 16 12:15:59.103737 kernel: audit: type=1334 audit(1765887359.086:326): prog-id=71 op=UNLOAD Dec 16 12:15:59.086000 audit: BPF prog-id=72 op=UNLOAD Dec 16 12:15:59.107629 kernel: audit: type=1334 audit(1765887359.086:327): prog-id=72 op=UNLOAD Dec 16 12:15:59.087000 audit: BPF prog-id=90 op=LOAD Dec 16 12:15:59.111602 kernel: audit: type=1334 audit(1765887359.087:328): prog-id=90 op=LOAD Dec 16 12:15:59.087000 audit: BPF prog-id=73 op=UNLOAD Dec 16 12:15:59.115433 kernel: audit: type=1334 audit(1765887359.087:329): prog-id=73 op=UNLOAD Dec 16 12:15:59.087000 audit: BPF prog-id=91 op=LOAD Dec 16 12:15:59.087000 audit: BPF prog-id=92 op=LOAD Dec 16 12:15:59.087000 audit: BPF prog-id=74 op=UNLOAD Dec 16 12:15:59.087000 audit: BPF prog-id=75 op=UNLOAD Dec 16 12:15:59.087000 audit: BPF prog-id=93 op=LOAD Dec 16 12:15:59.087000 audit: BPF prog-id=79 op=UNLOAD Dec 16 12:15:59.118000 audit: BPF prog-id=94 op=LOAD Dec 16 12:15:59.118000 audit: BPF prog-id=76 op=UNLOAD Dec 16 12:15:59.118000 audit: BPF prog-id=95 op=LOAD Dec 16 12:15:59.118000 audit: BPF prog-id=96 op=LOAD Dec 16 12:15:59.118000 audit: BPF prog-id=77 op=UNLOAD Dec 16 12:15:59.118000 audit: BPF prog-id=78 op=UNLOAD Dec 16 12:15:59.119000 audit: BPF prog-id=97 op=LOAD Dec 16 12:15:59.119000 audit: BPF prog-id=83 op=UNLOAD Dec 16 12:15:59.120000 audit: BPF prog-id=98 op=LOAD Dec 16 12:15:59.120000 audit: BPF prog-id=84 op=UNLOAD Dec 16 12:15:59.120000 audit: BPF prog-id=99 op=LOAD Dec 16 12:15:59.120000 audit: BPF prog-id=100 op=LOAD Dec 16 12:15:59.120000 audit: BPF prog-id=85 op=UNLOAD Dec 16 12:15:59.120000 audit: BPF prog-id=86 op=UNLOAD Dec 16 12:15:59.120000 audit: BPF prog-id=101 op=LOAD Dec 16 12:15:59.120000 audit: BPF prog-id=80 op=UNLOAD Dec 16 12:15:59.120000 audit: BPF prog-id=102 op=LOAD Dec 16 12:15:59.120000 audit: BPF prog-id=103 op=LOAD Dec 16 12:15:59.120000 audit: BPF prog-id=81 op=UNLOAD Dec 16 12:15:59.120000 audit: BPF prog-id=82 op=UNLOAD Dec 16 12:15:59.120000 audit: BPF prog-id=104 op=LOAD Dec 16 12:15:59.120000 audit: BPF prog-id=105 op=LOAD Dec 16 12:15:59.120000 audit: BPF prog-id=67 op=UNLOAD Dec 16 12:15:59.120000 audit: BPF prog-id=68 op=UNLOAD Dec 16 12:15:59.121000 audit: BPF prog-id=106 op=LOAD Dec 16 12:15:59.121000 audit: BPF prog-id=69 op=UNLOAD Dec 16 12:15:59.130944 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 12:15:59.131010 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 12:15:59.131278 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:15:59.131336 systemd[1]: kubelet.service: Consumed 72ms CPU time, 95.2M memory peak. Dec 16 12:15:59.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 12:15:59.133700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:15:59.324340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:15:59.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:15:59.329628 (kubelet)[3293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:15:59.352473 kubelet[3293]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:15:59.352473 kubelet[3293]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:15:59.352473 kubelet[3293]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:15:59.352473 kubelet[3293]: I1216 12:15:59.352346 3293 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:15:59.699693 kubelet[3293]: I1216 12:15:59.698744 3293 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 12:15:59.699693 kubelet[3293]: I1216 12:15:59.698773 3293 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:15:59.699693 kubelet[3293]: I1216 12:15:59.698964 3293 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 12:15:59.718512 kubelet[3293]: E1216 12:15:59.718480 3293 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:15:59.719845 kubelet[3293]: I1216 12:15:59.719814 3293 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:15:59.724211 kubelet[3293]: I1216 12:15:59.724181 3293 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:15:59.726572 kubelet[3293]: I1216 12:15:59.726554 3293 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:15:59.727338 kubelet[3293]: I1216 12:15:59.727309 3293 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:15:59.727559 kubelet[3293]: I1216 12:15:59.727399 3293 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-a-8648328498","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:15:59.727701 kubelet[3293]: I1216 12:15:59.727689 3293 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:15:59.727746 kubelet[3293]: I1216 12:15:59.727739 3293 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 12:15:59.727887 kubelet[3293]: I1216 12:15:59.727876 3293 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:15:59.730340 kubelet[3293]: I1216 12:15:59.730326 3293 kubelet.go:446] "Attempting to sync node with API server" Dec 16 12:15:59.730421 kubelet[3293]: I1216 12:15:59.730412 3293 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:15:59.730499 kubelet[3293]: I1216 12:15:59.730491 3293 kubelet.go:352] "Adding apiserver pod source" Dec 16 12:15:59.730541 kubelet[3293]: I1216 12:15:59.730533 3293 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:15:59.731731 kubelet[3293]: W1216 12:15:59.731697 3293 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.0.0-a-8648328498&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 16 12:15:59.731787 kubelet[3293]: E1216 12:15:59.731744 3293 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.0.0-a-8648328498&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:15:59.732359 
kubelet[3293]: W1216 12:15:59.732333 3293 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 16 12:15:59.732578 kubelet[3293]: E1216 12:15:59.732561 3293 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:15:59.733468 kubelet[3293]: I1216 12:15:59.732712 3293 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:15:59.733468 kubelet[3293]: I1216 12:15:59.732996 3293 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 12:15:59.733468 kubelet[3293]: W1216 12:15:59.733035 3293 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 12:15:59.733468 kubelet[3293]: I1216 12:15:59.733410 3293 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:15:59.733468 kubelet[3293]: I1216 12:15:59.733433 3293 server.go:1287] "Started kubelet" Dec 16 12:15:59.737033 kubelet[3293]: E1216 12:15:59.736925 3293 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547.0.0-a-8648328498.1881b132964350f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.0.0-a-8648328498,UID:ci-4547.0.0-a-8648328498,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.0.0-a-8648328498,},FirstTimestamp:2025-12-16 12:15:59.73342028 +0000 UTC m=+0.401632691,LastTimestamp:2025-12-16 12:15:59.73342028 +0000 UTC m=+0.401632691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.0.0-a-8648328498,}" Dec 16 12:15:59.737630 kubelet[3293]: I1216 12:15:59.737613 3293 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:15:59.739276 kubelet[3293]: I1216 12:15:59.739174 3293 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:15:59.739276 kubelet[3293]: I1216 12:15:59.739270 3293 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:15:59.740366 kubelet[3293]: I1216 12:15:59.740340 3293 server.go:479] "Adding debug handlers to kubelet server" Dec 16 12:15:59.740000 audit[3304]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3304 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:59.740000 audit[3304]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff6540b60 a2=0 a3=0 items=0 ppid=3293 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:59.740000 
audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:15:59.741592 kubelet[3293]: I1216 12:15:59.741547 3293 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:15:59.741739 kubelet[3293]: I1216 12:15:59.741722 3293 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:15:59.741000 audit[3305]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3305 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:59.741000 audit[3305]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9571ac0 a2=0 a3=0 items=0 ppid=3293 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:59.741000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:15:59.743276 kubelet[3293]: E1216 12:15:59.743259 3293 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:15:59.744174 kubelet[3293]: E1216 12:15:59.744152 3293 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-8648328498\" not found" Dec 16 12:15:59.744864 kubelet[3293]: I1216 12:15:59.744272 3293 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:15:59.745000 audit[3307]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3307 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:59.745000 audit[3307]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd6483d30 a2=0 a3=0 items=0 ppid=3293 pid=3307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:59.745000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:15:59.745831 kubelet[3293]: I1216 12:15:59.745792 3293 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:15:59.746798 kubelet[3293]: W1216 12:15:59.746663 3293 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 16 12:15:59.746857 kubelet[3293]: E1216 12:15:59.746697 3293 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:15:59.746889 kubelet[3293]: E1216 12:15:59.746865 3293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-a-8648328498?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="200ms" Dec 16 12:15:59.746978 kubelet[3293]: I1216 12:15:59.744283 3293 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:15:59.747011 kubelet[3293]: I1216 12:15:59.747000 3293 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:15:59.747000 audit[3309]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3309 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:59.747000 audit[3309]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc8b0dbe0 a2=0 a3=0 items=0 ppid=3293 pid=3309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:59.747000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:15:59.750574 kubelet[3293]: I1216 12:15:59.750554 3293 factory.go:221] Registration of the containerd container factory successfully Dec 16 12:15:59.750574 kubelet[3293]: I1216 12:15:59.750570 3293 factory.go:221] Registration of the systemd container factory successfully Dec 16 12:15:59.763462 kubelet[3293]: I1216 12:15:59.763433 3293 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:15:59.763462 kubelet[3293]: I1216 12:15:59.763466 3293 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:15:59.763534 kubelet[3293]: I1216 12:15:59.763479 3293 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:15:59.845918 kubelet[3293]: E1216 12:15:59.845893 3293 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-8648328498\" not found" Dec 16 12:15:59.920000 audit[3315]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3315 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:59.920000 audit[3315]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff8b67fe0 a2=0 a3=0 items=0 ppid=3293 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:59.920000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 12:15:59.921646 kubelet[3293]: I1216 12:15:59.921437 3293 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 16 12:15:59.921646 kubelet[3293]: I1216 12:15:59.921638 3293 policy_none.go:49] "None policy: Start" Dec 16 12:15:59.921687 kubelet[3293]: I1216 12:15:59.921650 3293 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:15:59.921687 kubelet[3293]: I1216 12:15:59.921658 3293 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:15:59.922000 audit[3316]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3316 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:59.922000 audit[3316]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd5a5efc0 a2=0 a3=0 items=0 ppid=3293 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:59.922000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:15:59.923303 kubelet[3293]: I1216 12:15:59.922817 3293 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 12:15:59.923303 kubelet[3293]: I1216 12:15:59.922833 3293 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 12:15:59.923303 kubelet[3293]: I1216 12:15:59.922848 3293 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:15:59.923303 kubelet[3293]: I1216 12:15:59.922852 3293 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 12:15:59.923303 kubelet[3293]: W1216 12:15:59.923243 3293 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 16 12:15:59.923469 kubelet[3293]: E1216 12:15:59.923418 3293 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:15:59.923000 audit[3317]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3317 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:59.923000 audit[3317]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdc3e9b70 a2=0 a3=0 items=0 ppid=3293 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:59.923000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:15:59.924000 audit[3321]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=3321 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:59.924000 audit[3321]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe6cf2880 a2=0 a3=0 items=0 ppid=3293 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:59.924000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:15:59.924000 audit[3320]: NETFILTER_CFG table=mangle:53 family=10 entries=1 op=nft_register_chain pid=3320 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:59.924000 audit[3320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdb33bda0 a2=0 a3=0 items=0 ppid=3293 pid=3320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:59.924000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:15:59.925025 kubelet[3293]: E1216 12:15:59.923917 3293 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 12:15:59.925000 audit[3322]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_chain pid=3322 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:59.925000 audit[3322]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc0477640 a2=0 a3=0 items=0 ppid=3293 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:59.925000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:15:59.925000 audit[3323]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=3323 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:15:59.925000 audit[3323]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdafd4030 a2=0 a3=0 items=0 ppid=3293 pid=3323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:59.925000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:15:59.926000 audit[3324]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3324 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:15:59.926000 audit[3324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffff4acb90 a2=0 a3=0 items=0 ppid=3293 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:15:59.926000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:15:59.930736 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 12:15:59.940389 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:15:59.942969 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 16 12:15:59.947057 kubelet[3293]: E1216 12:15:59.947033 3293 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-8648328498\" not found" Dec 16 12:15:59.947113 kubelet[3293]: E1216 12:15:59.947096 3293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-a-8648328498?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="400ms" Dec 16 12:15:59.949073 kubelet[3293]: I1216 12:15:59.949053 3293 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 12:15:59.949412 kubelet[3293]: I1216 12:15:59.949393 3293 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:15:59.949412 kubelet[3293]: I1216 12:15:59.949404 3293 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:15:59.950757 kubelet[3293]: I1216 12:15:59.949593 3293 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:15:59.952104 kubelet[3293]: E1216 12:15:59.952062 3293 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:15:59.952104 kubelet[3293]: E1216 12:15:59.952090 3293 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547.0.0-a-8648328498\" not found" Dec 16 12:16:00.032496 systemd[1]: Created slice kubepods-burstable-podaf4664b674899d05d82e8ad8c708bacf.slice - libcontainer container kubepods-burstable-podaf4664b674899d05d82e8ad8c708bacf.slice. Dec 16 12:16:00.039970 kubelet[3293]: E1216 12:16:00.039889 3293 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-8648328498\" not found" node="ci-4547.0.0-a-8648328498" Dec 16 12:16:00.042566 systemd[1]: Created slice kubepods-burstable-podf90407fe3a90be82de16a89d30a0b0d0.slice - libcontainer container kubepods-burstable-podf90407fe3a90be82de16a89d30a0b0d0.slice. Dec 16 12:16:00.045469 kubelet[3293]: E1216 12:16:00.045066 3293 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-8648328498\" not found" node="ci-4547.0.0-a-8648328498" Dec 16 12:16:00.046898 systemd[1]: Created slice kubepods-burstable-pod6b345a638eb80cbcc7f46328d78ff41a.slice - libcontainer container kubepods-burstable-pod6b345a638eb80cbcc7f46328d78ff41a.slice. 
Dec 16 12:16:00.048195 kubelet[3293]: E1216 12:16:00.048083 3293 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-8648328498\" not found" node="ci-4547.0.0-a-8648328498" Dec 16 12:16:00.048325 kubelet[3293]: I1216 12:16:00.048303 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af4664b674899d05d82e8ad8c708bacf-ca-certs\") pod \"kube-apiserver-ci-4547.0.0-a-8648328498\" (UID: \"af4664b674899d05d82e8ad8c708bacf\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-8648328498" Dec 16 12:16:00.048432 kubelet[3293]: I1216 12:16:00.048420 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af4664b674899d05d82e8ad8c708bacf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.0.0-a-8648328498\" (UID: \"af4664b674899d05d82e8ad8c708bacf\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-8648328498" Dec 16 12:16:00.048605 kubelet[3293]: I1216 12:16:00.048554 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f90407fe3a90be82de16a89d30a0b0d0-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-8648328498\" (UID: \"f90407fe3a90be82de16a89d30a0b0d0\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-8648328498" Dec 16 12:16:00.048605 kubelet[3293]: I1216 12:16:00.048572 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f90407fe3a90be82de16a89d30a0b0d0-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-8648328498\" (UID: \"f90407fe3a90be82de16a89d30a0b0d0\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-8648328498" Dec 16 12:16:00.048605 kubelet[3293]: I1216 12:16:00.048584 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f90407fe3a90be82de16a89d30a0b0d0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-a-8648328498\" (UID: \"f90407fe3a90be82de16a89d30a0b0d0\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-8648328498" Dec 16 12:16:00.048763 kubelet[3293]: I1216 12:16:00.048595 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6b345a638eb80cbcc7f46328d78ff41a-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-a-8648328498\" (UID: \"6b345a638eb80cbcc7f46328d78ff41a\") " pod="kube-system/kube-scheduler-ci-4547.0.0-a-8648328498" Dec 16 12:16:00.048763 kubelet[3293]: I1216 12:16:00.048726 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af4664b674899d05d82e8ad8c708bacf-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-a-8648328498\" (UID: \"af4664b674899d05d82e8ad8c708bacf\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-8648328498" Dec 16 12:16:00.048763 kubelet[3293]: I1216 12:16:00.048738 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f90407fe3a90be82de16a89d30a0b0d0-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4547.0.0-a-8648328498\" (UID: \"f90407fe3a90be82de16a89d30a0b0d0\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-8648328498" Dec 16 12:16:00.048763 kubelet[3293]: I1216 12:16:00.048747 3293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f90407fe3a90be82de16a89d30a0b0d0-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-a-8648328498\" (UID: \"f90407fe3a90be82de16a89d30a0b0d0\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-8648328498" Dec 16 12:16:00.051196 kubelet[3293]: I1216 12:16:00.051178 3293 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-8648328498" Dec 16 12:16:00.051570 kubelet[3293]: E1216 12:16:00.051550 3293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4547.0.0-a-8648328498" Dec 16 12:16:00.253351 kubelet[3293]: I1216 12:16:00.253040 3293 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-8648328498" Dec 16 12:16:00.253351 kubelet[3293]: E1216 12:16:00.253306 3293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4547.0.0-a-8648328498" Dec 16 12:16:00.341501 containerd[2086]: time="2025-12-16T12:16:00.341440957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-a-8648328498,Uid:af4664b674899d05d82e8ad8c708bacf,Namespace:kube-system,Attempt:0,}" Dec 16 12:16:00.345853 containerd[2086]: time="2025-12-16T12:16:00.345783961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-a-8648328498,Uid:f90407fe3a90be82de16a89d30a0b0d0,Namespace:kube-system,Attempt:0,}" Dec 16 12:16:00.348573 kubelet[3293]: E1216 12:16:00.348542 3293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-a-8648328498?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="800ms" Dec 16 12:16:00.348643 containerd[2086]: time="2025-12-16T12:16:00.348569229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-a-8648328498,Uid:6b345a638eb80cbcc7f46328d78ff41a,Namespace:kube-system,Attempt:0,}" Dec 16 12:16:00.407287 containerd[2086]: time="2025-12-16T12:16:00.407235494Z" level=info msg="connecting to shim e1bbb12ac2d0fe24a64b0d9a44113a42b4ed969c884fdcb318d1430e187b72c2" address="unix:///run/containerd/s/358f43315024c385f755386a1720d8b5b86a5d803317562513672956a09d06e4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:16:00.427691 systemd[1]: Started cri-containerd-e1bbb12ac2d0fe24a64b0d9a44113a42b4ed969c884fdcb318d1430e187b72c2.scope - libcontainer container e1bbb12ac2d0fe24a64b0d9a44113a42b4ed969c884fdcb318d1430e187b72c2. 
Dec 16 12:16:00.441597 containerd[2086]: time="2025-12-16T12:16:00.441560671Z" level=info msg="connecting to shim dc77b249ef14cdd1aad5eafea3c6056ba737fb56b43c40fb8aa2bbe662488ae9" address="unix:///run/containerd/s/8c8c67dba772ca9141ab471b24228cce207253efaaf0de4d2f29a48aa1d6316a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:16:00.441922 containerd[2086]: time="2025-12-16T12:16:00.441901435Z" level=info msg="connecting to shim 862079ef9b54ac238f6b412b6f6a0df7570eb33facdd24063ec0ba5305db8d22" address="unix:///run/containerd/s/96e29a1308655e919a9ac99027a7606ba87f3f807af7bfc005fcc1a2246fd6e2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:16:00.442000 audit: BPF prog-id=107 op=LOAD Dec 16 12:16:00.443000 audit: BPF prog-id=108 op=LOAD Dec 16 12:16:00.443000 audit[3344]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3333 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531626262313261633264306665323461363462306439613434313133 Dec 16 12:16:00.445000 audit: BPF prog-id=108 op=UNLOAD Dec 16 12:16:00.445000 audit[3344]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3333 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531626262313261633264306665323461363462306439613434313133 Dec 16 12:16:00.445000 audit: BPF prog-id=109 op=LOAD Dec 16 12:16:00.445000 audit[3344]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3333 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531626262313261633264306665323461363462306439613434313133 Dec 16 12:16:00.445000 audit: BPF prog-id=110 op=LOAD Dec 16 12:16:00.445000 audit[3344]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3333 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531626262313261633264306665323461363462306439613434313133 Dec 16 12:16:00.446000 audit: BPF prog-id=110 op=UNLOAD Dec 16 12:16:00.446000 audit[3344]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3333 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531626262313261633264306665323461363462306439613434313133 Dec 16 12:16:00.446000 audit: BPF prog-id=109 op=UNLOAD Dec 16 12:16:00.446000 audit[3344]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3333 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531626262313261633264306665323461363462306439613434313133 Dec 16 12:16:00.446000 audit: BPF prog-id=111 op=LOAD Dec 16 12:16:00.446000 audit[3344]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3333 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531626262313261633264306665323461363462306439613434313133 Dec 16 12:16:00.465581 systemd[1]: Started cri-containerd-862079ef9b54ac238f6b412b6f6a0df7570eb33facdd24063ec0ba5305db8d22.scope - libcontainer container 862079ef9b54ac238f6b412b6f6a0df7570eb33facdd24063ec0ba5305db8d22. Dec 16 12:16:00.468977 systemd[1]: Started cri-containerd-dc77b249ef14cdd1aad5eafea3c6056ba737fb56b43c40fb8aa2bbe662488ae9.scope - libcontainer container dc77b249ef14cdd1aad5eafea3c6056ba737fb56b43c40fb8aa2bbe662488ae9. 
Dec 16 12:16:00.485000 audit: BPF prog-id=112 op=LOAD Dec 16 12:16:00.485000 audit: BPF prog-id=113 op=LOAD Dec 16 12:16:00.485000 audit[3406]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3380 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463373762323439656631346364643161616435656166656133633630 Dec 16 12:16:00.485000 audit: BPF prog-id=113 op=UNLOAD Dec 16 12:16:00.485000 audit[3406]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3380 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463373762323439656631346364643161616435656166656133633630 Dec 16 12:16:00.486000 audit: BPF prog-id=114 op=LOAD Dec 16 12:16:00.486000 audit[3406]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3380 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463373762323439656631346364643161616435656166656133633630 Dec 16 12:16:00.486000 audit: BPF prog-id=115 op=LOAD Dec 16 12:16:00.486000 audit[3406]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3380 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463373762323439656631346364643161616435656166656133633630 Dec 16 12:16:00.486000 audit: BPF prog-id=115 op=UNLOAD Dec 16 12:16:00.486000 audit[3406]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3380 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463373762323439656631346364643161616435656166656133633630 Dec 16 12:16:00.486000 audit: BPF prog-id=114 op=UNLOAD Dec 16 12:16:00.486000 audit[3406]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3380 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463373762323439656631346364643161616435656166656133633630 Dec 16 12:16:00.486000 audit: BPF prog-id=116 op=LOAD Dec 16 12:16:00.486000 audit[3406]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3380 pid=3406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463373762323439656631346364643161616435656166656133633630 Dec 16 12:16:00.487000 audit: BPF prog-id=117 op=LOAD Dec 16 12:16:00.487000 audit: BPF prog-id=118 op=LOAD Dec 16 12:16:00.487000 audit[3404]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3371 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836323037396566396235346163323338663662343132623666366130 Dec 16 12:16:00.488000 audit: BPF prog-id=118 op=UNLOAD Dec 16 12:16:00.488000 audit[3404]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3371 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836323037396566396235346163323338663662343132623666366130 Dec 16 12:16:00.488000 audit: BPF prog-id=119 op=LOAD Dec 16 12:16:00.488000 audit[3404]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3371 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836323037396566396235346163323338663662343132623666366130 Dec 16 12:16:00.488000 audit: BPF prog-id=120 op=LOAD Dec 16 12:16:00.488000 audit[3404]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 
ppid=3371 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836323037396566396235346163323338663662343132623666366130 Dec 16 12:16:00.488000 audit: BPF prog-id=120 op=UNLOAD Dec 16 12:16:00.488000 audit[3404]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3371 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836323037396566396235346163323338663662343132623666366130 Dec 16 12:16:00.488000 audit: BPF prog-id=119 op=UNLOAD Dec 16 12:16:00.488000 audit[3404]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3371 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836323037396566396235346163323338663662343132623666366130 Dec 16 12:16:00.488000 audit: BPF prog-id=121 op=LOAD Dec 16 12:16:00.488000 audit[3404]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3371 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.488000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836323037396566396235346163323338663662343132623666366130 Dec 16 12:16:00.491967 containerd[2086]: time="2025-12-16T12:16:00.491709863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-a-8648328498,Uid:af4664b674899d05d82e8ad8c708bacf,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1bbb12ac2d0fe24a64b0d9a44113a42b4ed969c884fdcb318d1430e187b72c2\"" Dec 16 12:16:00.497921 containerd[2086]: time="2025-12-16T12:16:00.497816082Z" level=info msg="CreateContainer within sandbox \"e1bbb12ac2d0fe24a64b0d9a44113a42b4ed969c884fdcb318d1430e187b72c2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:16:00.519625 containerd[2086]: time="2025-12-16T12:16:00.519544958Z" level=info msg="Container 66e152bd9d1bafd4902cb4e44d3e7a378e2bfd7240efceb53c5e9fce021c8900: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:16:00.522612 containerd[2086]: time="2025-12-16T12:16:00.522577187Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-a-8648328498,Uid:f90407fe3a90be82de16a89d30a0b0d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc77b249ef14cdd1aad5eafea3c6056ba737fb56b43c40fb8aa2bbe662488ae9\"" Dec 16 12:16:00.525169 containerd[2086]: time="2025-12-16T12:16:00.525136975Z" level=info msg="CreateContainer within sandbox \"dc77b249ef14cdd1aad5eafea3c6056ba737fb56b43c40fb8aa2bbe662488ae9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:16:00.526716 containerd[2086]: time="2025-12-16T12:16:00.526689519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-a-8648328498,Uid:6b345a638eb80cbcc7f46328d78ff41a,Namespace:kube-system,Attempt:0,} returns sandbox id \"862079ef9b54ac238f6b412b6f6a0df7570eb33facdd24063ec0ba5305db8d22\"" Dec 16 12:16:00.528393 containerd[2086]: time="2025-12-16T12:16:00.528345610Z" level=info msg="CreateContainer within sandbox \"862079ef9b54ac238f6b412b6f6a0df7570eb33facdd24063ec0ba5305db8d22\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:16:00.539262 containerd[2086]: time="2025-12-16T12:16:00.539156454Z" level=info msg="CreateContainer within sandbox \"e1bbb12ac2d0fe24a64b0d9a44113a42b4ed969c884fdcb318d1430e187b72c2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"66e152bd9d1bafd4902cb4e44d3e7a378e2bfd7240efceb53c5e9fce021c8900\"" Dec 16 12:16:00.539656 containerd[2086]: time="2025-12-16T12:16:00.539621575Z" level=info msg="StartContainer for \"66e152bd9d1bafd4902cb4e44d3e7a378e2bfd7240efceb53c5e9fce021c8900\"" Dec 16 12:16:00.540556 containerd[2086]: time="2025-12-16T12:16:00.540534448Z" level=info msg="connecting to shim 66e152bd9d1bafd4902cb4e44d3e7a378e2bfd7240efceb53c5e9fce021c8900" address="unix:///run/containerd/s/358f43315024c385f755386a1720d8b5b86a5d803317562513672956a09d06e4" protocol=ttrpc version=3 Dec 16 12:16:00.559579 systemd[1]: Started cri-containerd-66e152bd9d1bafd4902cb4e44d3e7a378e2bfd7240efceb53c5e9fce021c8900.scope - libcontainer container 66e152bd9d1bafd4902cb4e44d3e7a378e2bfd7240efceb53c5e9fce021c8900. 
Dec 16 12:16:00.567000 audit: BPF prog-id=122 op=LOAD Dec 16 12:16:00.567000 audit: BPF prog-id=123 op=LOAD Dec 16 12:16:00.567000 audit[3468]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3333 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636653135326264396431626166643439303263623465343464336537 Dec 16 12:16:00.568000 audit: BPF prog-id=123 op=UNLOAD Dec 16 12:16:00.568000 audit[3468]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3333 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636653135326264396431626166643439303263623465343464336537 Dec 16 12:16:00.568000 audit: BPF prog-id=124 op=LOAD Dec 16 12:16:00.568000 audit[3468]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3333 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636653135326264396431626166643439303263623465343464336537 Dec 16 12:16:00.568000 audit: BPF prog-id=125 op=LOAD Dec 16 12:16:00.568000 audit[3468]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3333 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636653135326264396431626166643439303263623465343464336537 Dec 16 12:16:00.568000 audit: BPF prog-id=125 op=UNLOAD Dec 16 12:16:00.568000 audit[3468]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3333 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636653135326264396431626166643439303263623465343464336537 Dec 16 12:16:00.568000 audit: BPF prog-id=124 op=UNLOAD Dec 16 12:16:00.568000 audit[3468]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3333 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636653135326264396431626166643439303263623465343464336537 Dec 16 12:16:00.568000 audit: BPF prog-id=126 op=LOAD Dec 16 12:16:00.568000 audit[3468]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3333 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636653135326264396431626166643439303263623465343464336537 Dec 16 12:16:00.573420 containerd[2086]: time="2025-12-16T12:16:00.573396004Z" level=info msg="Container 2fba70c104c4198fade83b6ab158df87704acaedf20fbc286f6121e81e5da101: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:16:00.596696 containerd[2086]: time="2025-12-16T12:16:00.596662943Z" level=info msg="Container 0098c3be10667987c30b97fdc39f89d3170704edfc1d8506d1cd774cfb07ff1c: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:16:00.598970 containerd[2086]: time="2025-12-16T12:16:00.598888887Z" level=info msg="StartContainer for \"66e152bd9d1bafd4902cb4e44d3e7a378e2bfd7240efceb53c5e9fce021c8900\" returns successfully" Dec 16 12:16:00.611571 containerd[2086]: time="2025-12-16T12:16:00.611496571Z" level=info msg="CreateContainer within sandbox \"862079ef9b54ac238f6b412b6f6a0df7570eb33facdd24063ec0ba5305db8d22\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2fba70c104c4198fade83b6ab158df87704acaedf20fbc286f6121e81e5da101\"" Dec 16 12:16:00.612293 containerd[2086]: time="2025-12-16T12:16:00.611892666Z" level=info msg="StartContainer for \"2fba70c104c4198fade83b6ab158df87704acaedf20fbc286f6121e81e5da101\"" Dec 16 12:16:00.613815 containerd[2086]: time="2025-12-16T12:16:00.612861589Z" level=info msg="CreateContainer within sandbox \"dc77b249ef14cdd1aad5eafea3c6056ba737fb56b43c40fb8aa2bbe662488ae9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0098c3be10667987c30b97fdc39f89d3170704edfc1d8506d1cd774cfb07ff1c\"" Dec 16 12:16:00.614310 containerd[2086]: time="2025-12-16T12:16:00.613992981Z" level=info msg="StartContainer for \"0098c3be10667987c30b97fdc39f89d3170704edfc1d8506d1cd774cfb07ff1c\"" Dec 16 12:16:00.614310 containerd[2086]: time="2025-12-16T12:16:00.614035175Z" level=info msg="connecting to shim 2fba70c104c4198fade83b6ab158df87704acaedf20fbc286f6121e81e5da101" address="unix:///run/containerd/s/96e29a1308655e919a9ac99027a7606ba87f3f807af7bfc005fcc1a2246fd6e2" protocol=ttrpc version=3 Dec 16 12:16:00.615940 containerd[2086]: time="2025-12-16T12:16:00.615908098Z" level=info msg="connecting to shim 0098c3be10667987c30b97fdc39f89d3170704edfc1d8506d1cd774cfb07ff1c" address="unix:///run/containerd/s/8c8c67dba772ca9141ab471b24228cce207253efaaf0de4d2f29a48aa1d6316a" 
protocol=ttrpc version=3 Dec 16 12:16:00.633577 systemd[1]: Started cri-containerd-0098c3be10667987c30b97fdc39f89d3170704edfc1d8506d1cd774cfb07ff1c.scope - libcontainer container 0098c3be10667987c30b97fdc39f89d3170704edfc1d8506d1cd774cfb07ff1c. Dec 16 12:16:00.636379 systemd[1]: Started cri-containerd-2fba70c104c4198fade83b6ab158df87704acaedf20fbc286f6121e81e5da101.scope - libcontainer container 2fba70c104c4198fade83b6ab158df87704acaedf20fbc286f6121e81e5da101. Dec 16 12:16:00.645000 audit: BPF prog-id=127 op=LOAD Dec 16 12:16:00.645000 audit: BPF prog-id=128 op=LOAD Dec 16 12:16:00.645000 audit[3498]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3371 pid=3498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266626137306331303463343139386661646538336236616231353864 Dec 16 12:16:00.645000 audit: BPF prog-id=128 op=UNLOAD Dec 16 12:16:00.645000 audit[3498]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3371 pid=3498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266626137306331303463343139386661646538336236616231353864 Dec 16 12:16:00.645000 audit: BPF prog-id=129 op=LOAD Dec 16 12:16:00.645000 audit[3498]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3371 pid=3498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266626137306331303463343139386661646538336236616231353864 Dec 16 12:16:00.645000 audit: BPF prog-id=130 op=LOAD Dec 16 12:16:00.645000 audit[3498]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3371 pid=3498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.645000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266626137306331303463343139386661646538336236616231353864 Dec 16 12:16:00.646000 audit: BPF prog-id=130 op=UNLOAD Dec 16 12:16:00.646000 audit[3498]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3371 pid=3498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266626137306331303463343139386661646538336236616231353864 Dec 16 12:16:00.646000 audit: BPF prog-id=129 op=UNLOAD Dec 16 12:16:00.646000 audit[3498]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3371 pid=3498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266626137306331303463343139386661646538336236616231353864 Dec 16 12:16:00.646000 audit: BPF prog-id=131 op=LOAD Dec 16 12:16:00.646000 audit[3498]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3371 pid=3498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.646000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266626137306331303463343139386661646538336236616231353864 Dec 16 12:16:00.649000 audit: BPF prog-id=132 op=LOAD Dec 16 12:16:00.649000 audit: BPF prog-id=133 op=LOAD Dec 16 12:16:00.649000 audit[3500]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3380 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.649000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030393863336265313036363739383763333062393766646333396638 Dec 16 12:16:00.650000 audit: BPF prog-id=133 op=UNLOAD Dec 16 12:16:00.650000 audit[3500]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3380 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030393863336265313036363739383763333062393766646333396638 Dec 16 12:16:00.650000 audit: BPF prog-id=134 op=LOAD Dec 16 12:16:00.650000 audit[3500]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3380 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.650000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030393863336265313036363739383763333062393766646333396638 Dec 16 12:16:00.651000 audit: BPF prog-id=135 op=LOAD Dec 16 12:16:00.651000 audit[3500]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3380 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030393863336265313036363739383763333062393766646333396638 Dec 16 12:16:00.651000 audit: BPF prog-id=135 op=UNLOAD Dec 16 12:16:00.651000 audit[3500]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3380 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030393863336265313036363739383763333062393766646333396638 Dec 16 12:16:00.651000 audit: BPF prog-id=134 op=UNLOAD Dec 16 12:16:00.651000 audit[3500]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3380 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030393863336265313036363739383763333062393766646333396638 Dec 16 12:16:00.651000 audit: BPF prog-id=136 op=LOAD Dec 16 12:16:00.651000 audit[3500]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3380 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:00.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030393863336265313036363739383763333062393766646333396638 Dec 16 12:16:00.654994 kubelet[3293]: I1216 12:16:00.654949 3293 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-8648328498" Dec 16 12:16:00.692636 containerd[2086]: time="2025-12-16T12:16:00.692606315Z" level=info msg="StartContainer for \"0098c3be10667987c30b97fdc39f89d3170704edfc1d8506d1cd774cfb07ff1c\" returns successfully" Dec 16 12:16:00.693856 containerd[2086]: time="2025-12-16T12:16:00.693820455Z" level=info msg="StartContainer for \"2fba70c104c4198fade83b6ab158df87704acaedf20fbc286f6121e81e5da101\" returns 
successfully" Dec 16 12:16:00.930965 kubelet[3293]: E1216 12:16:00.930881 3293 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-8648328498\" not found" node="ci-4547.0.0-a-8648328498" Dec 16 12:16:00.932956 kubelet[3293]: E1216 12:16:00.932903 3293 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-8648328498\" not found" node="ci-4547.0.0-a-8648328498" Dec 16 12:16:00.937364 kubelet[3293]: E1216 12:16:00.937316 3293 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-8648328498\" not found" node="ci-4547.0.0-a-8648328498" Dec 16 12:16:01.622011 kubelet[3293]: E1216 12:16:01.621969 3293 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547.0.0-a-8648328498\" not found" node="ci-4547.0.0-a-8648328498" Dec 16 12:16:01.736939 kubelet[3293]: I1216 12:16:01.736775 3293 apiserver.go:52] "Watching apiserver" Dec 16 12:16:01.747987 kubelet[3293]: I1216 12:16:01.747949 3293 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:16:01.808958 kubelet[3293]: I1216 12:16:01.808912 3293 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.0.0-a-8648328498" Dec 16 12:16:01.845240 kubelet[3293]: I1216 12:16:01.845214 3293 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-a-8648328498" Dec 16 12:16:01.939363 kubelet[3293]: I1216 12:16:01.938256 3293 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-a-8648328498" Dec 16 12:16:01.939630 kubelet[3293]: I1216 12:16:01.939612 3293 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-a-8648328498" Dec 16 12:16:01.964196 kubelet[3293]: E1216 12:16:01.964079 3293 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-a-8648328498\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547.0.0-a-8648328498" Dec 16 12:16:01.964196 kubelet[3293]: I1216 12:16:01.964097 3293 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-8648328498" Dec 16 12:16:01.964540 kubelet[3293]: E1216 12:16:01.964439 3293 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-a-8648328498\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547.0.0-a-8648328498" Dec 16 12:16:01.964819 kubelet[3293]: E1216 12:16:01.964803 3293 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-a-8648328498\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547.0.0-a-8648328498" Dec 16 12:16:01.968108 kubelet[3293]: E1216 12:16:01.968003 3293 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.0.0-a-8648328498\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-8648328498" Dec 16 12:16:01.968108 kubelet[3293]: I1216 12:16:01.968039 3293 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-a-8648328498" Dec 16 12:16:01.970080 kubelet[3293]: 
E1216 12:16:01.970057 3293 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-a-8648328498\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547.0.0-a-8648328498" Dec 16 12:16:05.908658 systemd[1]: Reload requested from client PID 3565 ('systemctl') (unit session-10.scope)... Dec 16 12:16:05.908671 systemd[1]: Reloading... Dec 16 12:16:05.984478 zram_generator::config[3611]: No configuration found. Dec 16 12:16:06.093646 kubelet[3293]: I1216 12:16:06.093611 3293 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-a-8648328498" Dec 16 12:16:06.102327 kubelet[3293]: W1216 12:16:06.101763 3293 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 12:16:06.158986 systemd[1]: Reloading finished in 250 ms. Dec 16 12:16:06.179635 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:16:06.191771 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:16:06.191986 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:16:06.206982 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 16 12:16:06.207044 kernel: audit: type=1131 audit(1765887366.190:424): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:06.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:06.192052 systemd[1]: kubelet.service: Consumed 671ms CPU time, 127.5M memory peak. Dec 16 12:16:06.195706 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 12:16:06.206000 audit: BPF prog-id=137 op=LOAD Dec 16 12:16:06.213467 kernel: audit: type=1334 audit(1765887366.206:425): prog-id=137 op=LOAD Dec 16 12:16:06.213000 audit: BPF prog-id=101 op=UNLOAD Dec 16 12:16:06.213000 audit: BPF prog-id=138 op=LOAD Dec 16 12:16:06.223494 kernel: audit: type=1334 audit(1765887366.213:426): prog-id=101 op=UNLOAD Dec 16 12:16:06.223563 kernel: audit: type=1334 audit(1765887366.213:427): prog-id=138 op=LOAD Dec 16 12:16:06.213000 audit: BPF prog-id=139 op=LOAD Dec 16 12:16:06.227644 kernel: audit: type=1334 audit(1765887366.213:428): prog-id=139 op=LOAD Dec 16 12:16:06.213000 audit: BPF prog-id=102 op=UNLOAD Dec 16 12:16:06.231835 kernel: audit: type=1334 audit(1765887366.213:429): prog-id=102 op=UNLOAD Dec 16 12:16:06.213000 audit: BPF prog-id=103 op=UNLOAD Dec 16 12:16:06.235889 kernel: audit: type=1334 audit(1765887366.213:430): prog-id=103 op=UNLOAD Dec 16 12:16:06.213000 audit: BPF prog-id=140 op=LOAD Dec 16 12:16:06.239833 kernel: audit: type=1334 audit(1765887366.213:431): prog-id=140 op=LOAD Dec 16 12:16:06.244099 kernel: audit: type=1334 audit(1765887366.213:432): prog-id=93 op=UNLOAD Dec 16 12:16:06.213000 audit: BPF prog-id=93 op=UNLOAD Dec 16 12:16:06.217000 audit: BPF prog-id=141 op=LOAD Dec 16 12:16:06.248576 kernel: audit: type=1334 audit(1765887366.217:433): prog-id=141 op=LOAD Dec 16 12:16:06.217000 audit: BPF prog-id=97 op=UNLOAD Dec 16 12:16:06.221000 audit: BPF prog-id=142 op=LOAD Dec 16 12:16:06.221000 audit: BPF prog-id=90 op=UNLOAD Dec 16 12:16:06.222000 audit: BPF prog-id=143 op=LOAD Dec 16 12:16:06.222000 audit: BPF prog-id=144 op=LOAD Dec 16 12:16:06.222000 audit: BPF prog-id=91 op=UNLOAD Dec 16 12:16:06.222000 audit: BPF prog-id=92 op=UNLOAD Dec 16 12:16:06.226000 audit: BPF prog-id=145 op=LOAD Dec 16 12:16:06.226000 audit: BPF prog-id=87 op=UNLOAD Dec 16 12:16:06.230000 audit: BPF prog-id=146 op=LOAD Dec 16 12:16:06.230000 audit: BPF prog-id=147 op=LOAD Dec 16 12:16:06.230000 audit: BPF prog-id=88 op=UNLOAD Dec 16 12:16:06.230000 audit: BPF prog-id=89 op=UNLOAD Dec 16 12:16:06.234000 audit: BPF prog-id=148 op=LOAD Dec 16 12:16:06.234000 audit: BPF prog-id=94 op=UNLOAD Dec 16 12:16:06.242000 audit: BPF prog-id=149 op=LOAD Dec 16 12:16:06.242000 audit: BPF prog-id=150 op=LOAD Dec 16 12:16:06.242000 audit: BPF prog-id=95 op=UNLOAD Dec 16 12:16:06.242000 audit: BPF prog-id=96 op=UNLOAD Dec 16 12:16:06.247000 audit: BPF prog-id=151 op=LOAD Dec 16 12:16:06.247000 audit: BPF prog-id=106 op=UNLOAD Dec 16 12:16:06.247000 audit: BPF prog-id=152 op=LOAD Dec 16 12:16:06.247000 audit: BPF prog-id=153 op=LOAD Dec 16 12:16:06.247000 audit: BPF prog-id=104 op=UNLOAD Dec 16 12:16:06.247000 audit: BPF prog-id=105 op=UNLOAD Dec 16 12:16:06.248000 audit: BPF prog-id=154 op=LOAD Dec 16 12:16:06.248000 audit: BPF prog-id=98 op=UNLOAD Dec 16 12:16:06.248000 audit: BPF prog-id=155 op=LOAD Dec 16 12:16:06.248000 audit: BPF prog-id=156 op=LOAD Dec 16 12:16:06.248000 audit: BPF prog-id=99 op=UNLOAD Dec 16 12:16:06.248000 audit: BPF prog-id=100 op=UNLOAD Dec 16 12:16:06.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:06.901206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
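The audit records above (prog-id=87..156 with op=LOAD/UNLOAD and no syscall context) appear to be systemd swapping the cgroup BPF programs attached to its units while kubelet.service is stopped and restarted: each unit's old program is unloaded as a freshly loaded one takes its place. A minimal sketch for tallying such records from a saved journal follows; it is not part of this log, the file name in the usage comment is illustrative, and it only assumes the "prog-id=N op=LOAD|UNLOAD" wording visible here.

    import re
    import sys
    from collections import Counter

    # Matches both the plain "audit: BPF prog-id=137 op=LOAD" form and the
    # copies kauditd echoes into the kernel log ("type=1334 audit(...): ...").
    BPF_EVENT = re.compile(r"prog-id=(\d+) op=(LOAD|UNLOAD)")

    def surviving_prog_ids(lines):
        """Program IDs with more LOAD than UNLOAD matches in the given text.

        Echoed duplicates inflate both sides of the balance roughly equally,
        so the sign (attached vs. detached) still works as a rough check.
        """
        balance = Counter()
        for line in lines:
            for prog_id, op in BPF_EVENT.findall(line):
                balance[int(prog_id)] += 1 if op == "LOAD" else -1
        return sorted(pid for pid, n in balance.items() if n > 0)

    if __name__ == "__main__":
        # Usage (illustrative): journalctl -b | python3 bpf_tally.py
        print(surviving_prog_ids(sys.stdin))
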
Dec 16 12:16:06.904347 (kubelet)[3679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:16:06.937066 kubelet[3679]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:16:06.937313 kubelet[3679]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:16:06.937357 kubelet[3679]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:16:06.937651 kubelet[3679]: I1216 12:16:06.937495 3679 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:16:06.943505 kubelet[3679]: I1216 12:16:06.943479 3679 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 12:16:06.943505 kubelet[3679]: I1216 12:16:06.943500 3679 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:16:06.943964 kubelet[3679]: I1216 12:16:06.943947 3679 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 12:16:06.944846 kubelet[3679]: I1216 12:16:06.944826 3679 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 16 12:16:06.946387 kubelet[3679]: I1216 12:16:06.946357 3679 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:16:06.949591 kubelet[3679]: I1216 12:16:06.949574 3679 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:16:06.951808 kubelet[3679]: I1216 12:16:06.951793 3679 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:16:06.951942 kubelet[3679]: I1216 12:16:06.951921 3679 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:16:06.952055 kubelet[3679]: I1216 12:16:06.951940 3679 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-a-8648328498","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:16:06.952125 kubelet[3679]: I1216 12:16:06.952059 3679 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:16:06.952125 kubelet[3679]: I1216 12:16:06.952065 3679 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 12:16:06.952125 kubelet[3679]: I1216 12:16:06.952100 3679 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:16:06.952205 kubelet[3679]: I1216 12:16:06.952194 3679 kubelet.go:446] "Attempting to sync node with API server" Dec 16 12:16:06.952205 kubelet[3679]: I1216 12:16:06.952204 3679 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:16:06.952243 kubelet[3679]: I1216 12:16:06.952218 3679 kubelet.go:352] "Adding apiserver pod source" Dec 16 12:16:06.952243 kubelet[3679]: I1216 12:16:06.952226 3679 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:16:06.954462 kubelet[3679]: I1216 12:16:06.954427 3679 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:16:06.955490 kubelet[3679]: I1216 12:16:06.955470 3679 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 12:16:06.956515 kubelet[3679]: I1216 12:16:06.956499 3679 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:16:06.956586 kubelet[3679]: I1216 12:16:06.956525 3679 server.go:1287] "Started kubelet" Dec 16 12:16:06.961474 kubelet[3679]: I1216 12:16:06.960315 3679 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:16:06.965630 kubelet[3679]: I1216 12:16:06.965605 3679 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:16:06.968376 kubelet[3679]: I1216 12:16:06.960341 3679 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:16:06.970982 kubelet[3679]: I1216 12:16:06.969618 3679 server.go:479] "Adding debug handlers to kubelet server" Dec 16 12:16:06.970982 kubelet[3679]: I1216 12:16:06.970140 3679 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:16:06.970982 kubelet[3679]: E1216 12:16:06.970258 3679 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-8648328498\" not found" Dec 16 12:16:06.973480 kubelet[3679]: I1216 12:16:06.973139 3679 factory.go:221] Registration of the systemd container factory successfully Dec 16 12:16:06.973480 kubelet[3679]: I1216 12:16:06.973206 3679 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:16:06.973588 kubelet[3679]: E1216 12:16:06.973552 3679 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:16:06.974319 kubelet[3679]: I1216 12:16:06.960389 3679 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:16:06.974492 kubelet[3679]: I1216 12:16:06.974477 3679 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:16:06.975677 kubelet[3679]: I1216 12:16:06.975655 3679 factory.go:221] Registration of the containerd container factory successfully Dec 16 12:16:06.978248 kubelet[3679]: I1216 12:16:06.978220 3679 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:16:06.978526 kubelet[3679]: I1216 12:16:06.978514 3679 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:16:06.980953 kubelet[3679]: I1216 12:16:06.980934 3679 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 12:16:06.981890 kubelet[3679]: I1216 12:16:06.981875 3679 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 12:16:06.982157 kubelet[3679]: I1216 12:16:06.981957 3679 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 12:16:06.982157 kubelet[3679]: I1216 12:16:06.981974 3679 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 12:16:06.982157 kubelet[3679]: I1216 12:16:06.981979 3679 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 12:16:06.982157 kubelet[3679]: E1216 12:16:06.982009 3679 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:16:07.011523 kubelet[3679]: I1216 12:16:07.011502 3679 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:16:07.011523 kubelet[3679]: I1216 12:16:07.011517 3679 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:16:07.011603 kubelet[3679]: I1216 12:16:07.011539 3679 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:16:07.011663 kubelet[3679]: I1216 12:16:07.011646 3679 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:16:07.011681 kubelet[3679]: I1216 12:16:07.011658 3679 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:16:07.011681 kubelet[3679]: I1216 12:16:07.011671 3679 policy_none.go:49] "None policy: Start" Dec 16 12:16:07.011681 kubelet[3679]: I1216 12:16:07.011678 3679 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:16:07.011731 kubelet[3679]: I1216 12:16:07.011684 3679 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:16:07.011758 kubelet[3679]: I1216 12:16:07.011747 3679 state_mem.go:75] "Updated machine memory state" Dec 16 12:16:07.015363 kubelet[3679]: I1216 12:16:07.015349 3679 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 12:16:07.015972 kubelet[3679]: I1216 12:16:07.015799 3679 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:16:07.015972 kubelet[3679]: I1216 12:16:07.015812 3679 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:16:07.015972 kubelet[3679]: I1216 12:16:07.015937 3679 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:16:07.016809 kubelet[3679]: E1216 12:16:07.016787 3679 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:16:07.082887 kubelet[3679]: I1216 12:16:07.082396 3679 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-a-8648328498" Dec 16 12:16:07.082887 kubelet[3679]: I1216 12:16:07.082726 3679 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-a-8648328498" Dec 16 12:16:07.082967 kubelet[3679]: I1216 12:16:07.082917 3679 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-8648328498" Dec 16 12:16:07.092850 kubelet[3679]: W1216 12:16:07.092670 3679 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 12:16:07.097199 kubelet[3679]: W1216 12:16:07.097179 3679 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 12:16:07.097941 kubelet[3679]: W1216 12:16:07.097907 3679 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 12:16:07.097992 kubelet[3679]: E1216 12:16:07.097960 3679 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-a-8648328498\" already exists" pod="kube-system/kube-scheduler-ci-4547.0.0-a-8648328498" Dec 16 12:16:07.118286 kubelet[3679]: I1216 12:16:07.118271 3679 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-8648328498" Dec 16 12:16:07.128961 kubelet[3679]: I1216 12:16:07.128920 3679 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547.0.0-a-8648328498" Dec 16 12:16:07.129121 kubelet[3679]: I1216 12:16:07.128995 3679 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.0.0-a-8648328498" Dec 16 12:16:07.179696 kubelet[3679]: I1216 12:16:07.179291 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af4664b674899d05d82e8ad8c708bacf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.0.0-a-8648328498\" (UID: \"af4664b674899d05d82e8ad8c708bacf\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-8648328498" Dec 16 12:16:07.179696 kubelet[3679]: I1216 12:16:07.179325 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f90407fe3a90be82de16a89d30a0b0d0-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.0.0-a-8648328498\" (UID: \"f90407fe3a90be82de16a89d30a0b0d0\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-8648328498" Dec 16 12:16:07.179696 kubelet[3679]: I1216 12:16:07.179338 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f90407fe3a90be82de16a89d30a0b0d0-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-8648328498\" (UID: \"f90407fe3a90be82de16a89d30a0b0d0\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-8648328498" Dec 16 12:16:07.179696 kubelet[3679]: I1216 12:16:07.179351 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/f90407fe3a90be82de16a89d30a0b0d0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-a-8648328498\" (UID: \"f90407fe3a90be82de16a89d30a0b0d0\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-8648328498" Dec 16 12:16:07.179696 kubelet[3679]: I1216 12:16:07.179364 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6b345a638eb80cbcc7f46328d78ff41a-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-a-8648328498\" (UID: \"6b345a638eb80cbcc7f46328d78ff41a\") " pod="kube-system/kube-scheduler-ci-4547.0.0-a-8648328498" Dec 16 12:16:07.180684 kubelet[3679]: I1216 12:16:07.179375 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af4664b674899d05d82e8ad8c708bacf-ca-certs\") pod \"kube-apiserver-ci-4547.0.0-a-8648328498\" (UID: \"af4664b674899d05d82e8ad8c708bacf\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-8648328498" Dec 16 12:16:07.180684 kubelet[3679]: I1216 12:16:07.179386 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af4664b674899d05d82e8ad8c708bacf-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-a-8648328498\" (UID: \"af4664b674899d05d82e8ad8c708bacf\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-8648328498" Dec 16 12:16:07.180684 kubelet[3679]: I1216 12:16:07.179395 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f90407fe3a90be82de16a89d30a0b0d0-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-8648328498\" (UID: \"f90407fe3a90be82de16a89d30a0b0d0\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-8648328498" Dec 16 12:16:07.180684 kubelet[3679]: I1216 12:16:07.179406 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f90407fe3a90be82de16a89d30a0b0d0-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-a-8648328498\" (UID: \"f90407fe3a90be82de16a89d30a0b0d0\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-8648328498" Dec 16 12:16:07.953237 kubelet[3679]: I1216 12:16:07.953170 3679 apiserver.go:52] "Watching apiserver" Dec 16 12:16:07.979183 kubelet[3679]: I1216 12:16:07.979150 3679 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:16:08.021071 kubelet[3679]: I1216 12:16:08.020799 3679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-8648328498" podStartSLOduration=1.020790751 podStartE2EDuration="1.020790751s" podCreationTimestamp="2025-12-16 12:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:16:08.019680246 +0000 UTC m=+1.112605803" watchObservedRunningTime="2025-12-16 12:16:08.020790751 +0000 UTC m=+1.113716404" Dec 16 12:16:08.049270 kubelet[3679]: I1216 12:16:08.049228 3679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547.0.0-a-8648328498" podStartSLOduration=2.049218698 podStartE2EDuration="2.049218698s" podCreationTimestamp="2025-12-16 12:16:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:16:08.037635854 +0000 UTC m=+1.130561443" watchObservedRunningTime="2025-12-16 12:16:08.049218698 +0000 UTC m=+1.142144271" Dec 16 12:16:08.060189 kubelet[3679]: I1216 12:16:08.060158 3679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547.0.0-a-8648328498" podStartSLOduration=1.06014779 podStartE2EDuration="1.06014779s" podCreationTimestamp="2025-12-16 12:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:16:08.049485548 +0000 UTC m=+1.142411097" watchObservedRunningTime="2025-12-16 12:16:08.06014779 +0000 UTC m=+1.153073347" Dec 16 12:16:10.458120 kubelet[3679]: I1216 12:16:10.458008 3679 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:16:10.458684 containerd[2086]: time="2025-12-16T12:16:10.458645520Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:16:10.459268 kubelet[3679]: I1216 12:16:10.459036 3679 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:16:11.075628 systemd[1]: Created slice kubepods-besteffort-podb7e6b868_8fca_4abe_b28a_7976cac3fa82.slice - libcontainer container kubepods-besteffort-podb7e6b868_8fca_4abe_b28a_7976cac3fa82.slice. Dec 16 12:16:11.099581 kubelet[3679]: I1216 12:16:11.099551 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7e6b868-8fca-4abe-b28a-7976cac3fa82-xtables-lock\") pod \"kube-proxy-qpvds\" (UID: \"b7e6b868-8fca-4abe-b28a-7976cac3fa82\") " pod="kube-system/kube-proxy-qpvds" Dec 16 12:16:11.099581 kubelet[3679]: I1216 12:16:11.099581 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxt6b\" (UniqueName: \"kubernetes.io/projected/b7e6b868-8fca-4abe-b28a-7976cac3fa82-kube-api-access-mxt6b\") pod \"kube-proxy-qpvds\" (UID: \"b7e6b868-8fca-4abe-b28a-7976cac3fa82\") " pod="kube-system/kube-proxy-qpvds" Dec 16 12:16:11.099709 kubelet[3679]: I1216 12:16:11.099598 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b7e6b868-8fca-4abe-b28a-7976cac3fa82-kube-proxy\") pod \"kube-proxy-qpvds\" (UID: \"b7e6b868-8fca-4abe-b28a-7976cac3fa82\") " pod="kube-system/kube-proxy-qpvds" Dec 16 12:16:11.099709 kubelet[3679]: I1216 12:16:11.099609 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7e6b868-8fca-4abe-b28a-7976cac3fa82-lib-modules\") pod \"kube-proxy-qpvds\" (UID: \"b7e6b868-8fca-4abe-b28a-7976cac3fa82\") " pod="kube-system/kube-proxy-qpvds" Dec 16 12:16:11.204454 kubelet[3679]: E1216 12:16:11.204422 3679 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 16 12:16:11.204454 kubelet[3679]: E1216 12:16:11.204454 3679 projected.go:194] Error preparing data for projected volume kube-api-access-mxt6b for pod kube-system/kube-proxy-qpvds: configmap "kube-root-ca.crt" not found Dec 16 12:16:11.204594 kubelet[3679]: E1216 12:16:11.204502 3679 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/b7e6b868-8fca-4abe-b28a-7976cac3fa82-kube-api-access-mxt6b podName:b7e6b868-8fca-4abe-b28a-7976cac3fa82 nodeName:}" failed. No retries permitted until 2025-12-16 12:16:11.704487847 +0000 UTC m=+4.797413396 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mxt6b" (UniqueName: "kubernetes.io/projected/b7e6b868-8fca-4abe-b28a-7976cac3fa82-kube-api-access-mxt6b") pod "kube-proxy-qpvds" (UID: "b7e6b868-8fca-4abe-b28a-7976cac3fa82") : configmap "kube-root-ca.crt" not found Dec 16 12:16:11.552192 systemd[1]: Created slice kubepods-besteffort-podde88c53a_8ff1_44d5_a733_02cd5e521cdc.slice - libcontainer container kubepods-besteffort-podde88c53a_8ff1_44d5_a733_02cd5e521cdc.slice. Dec 16 12:16:11.601816 kubelet[3679]: I1216 12:16:11.601785 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8527\" (UniqueName: \"kubernetes.io/projected/de88c53a-8ff1-44d5-a733-02cd5e521cdc-kube-api-access-t8527\") pod \"tigera-operator-7dcd859c48-24p78\" (UID: \"de88c53a-8ff1-44d5-a733-02cd5e521cdc\") " pod="tigera-operator/tigera-operator-7dcd859c48-24p78" Dec 16 12:16:11.602068 kubelet[3679]: I1216 12:16:11.601831 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/de88c53a-8ff1-44d5-a733-02cd5e521cdc-var-lib-calico\") pod \"tigera-operator-7dcd859c48-24p78\" (UID: \"de88c53a-8ff1-44d5-a733-02cd5e521cdc\") " pod="tigera-operator/tigera-operator-7dcd859c48-24p78" Dec 16 12:16:11.856891 containerd[2086]: time="2025-12-16T12:16:11.856783117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-24p78,Uid:de88c53a-8ff1-44d5-a733-02cd5e521cdc,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:16:11.895634 containerd[2086]: time="2025-12-16T12:16:11.895579630Z" level=info msg="connecting to shim 3a8a6eb03e2032d3b7b71cba83513bde276dc591ef2d0046cb601913dc01a1d2" address="unix:///run/containerd/s/954a22108f42fe448f6335156d8cbf0b108263d2367d1f59c8051c5cd9652435" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:16:11.915594 systemd[1]: Started cri-containerd-3a8a6eb03e2032d3b7b71cba83513bde276dc591ef2d0046cb601913dc01a1d2.scope - libcontainer container 3a8a6eb03e2032d3b7b71cba83513bde276dc591ef2d0046cb601913dc01a1d2. 
Dec 16 12:16:11.930022 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 12:16:11.930114 kernel: audit: type=1334 audit(1765887371.921:466): prog-id=157 op=LOAD Dec 16 12:16:11.921000 audit: BPF prog-id=157 op=LOAD Dec 16 12:16:11.928000 audit: BPF prog-id=158 op=LOAD Dec 16 12:16:11.934060 kernel: audit: type=1334 audit(1765887371.928:467): prog-id=158 op=LOAD Dec 16 12:16:11.928000 audit[3743]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3732 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:11.949639 kernel: audit: type=1300 audit(1765887371.928:467): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3732 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:11.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361386136656230336532303332643362376237316362613833353133 Dec 16 12:16:11.965269 kernel: audit: type=1327 audit(1765887371.928:467): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361386136656230336532303332643362376237316362613833353133 Dec 16 12:16:11.928000 audit: BPF prog-id=158 op=UNLOAD Dec 16 12:16:11.969939 kernel: audit: type=1334 audit(1765887371.928:468): prog-id=158 op=UNLOAD Dec 16 12:16:11.928000 audit[3743]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3732 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:11.983748 containerd[2086]: time="2025-12-16T12:16:11.983605396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qpvds,Uid:b7e6b868-8fca-4abe-b28a-7976cac3fa82,Namespace:kube-system,Attempt:0,}" Dec 16 12:16:11.985068 kernel: audit: type=1300 audit(1765887371.928:468): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3732 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:11.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361386136656230336532303332643362376237316362613833353133 Dec 16 12:16:12.000833 kernel: audit: type=1327 audit(1765887371.928:468): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361386136656230336532303332643362376237316362613833353133 Dec 16 12:16:11.928000 audit: BPF prog-id=159 op=LOAD Dec 16 12:16:12.006146 kernel: audit: type=1334 audit(1765887371.928:469): prog-id=159 op=LOAD Dec 16 
12:16:11.928000 audit[3743]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3732 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.022686 kernel: audit: type=1300 audit(1765887371.928:469): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3732 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:11.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361386136656230336532303332643362376237316362613833353133 Dec 16 12:16:12.039950 kernel: audit: type=1327 audit(1765887371.928:469): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361386136656230336532303332643362376237316362613833353133 Dec 16 12:16:11.928000 audit: BPF prog-id=160 op=LOAD Dec 16 12:16:11.928000 audit[3743]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3732 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:11.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361386136656230336532303332643362376237316362613833353133 Dec 16 12:16:11.928000 audit: BPF prog-id=160 op=UNLOAD Dec 16 12:16:11.928000 audit[3743]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3732 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:11.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361386136656230336532303332643362376237316362613833353133 Dec 16 12:16:11.928000 audit: BPF prog-id=159 op=UNLOAD Dec 16 12:16:11.928000 audit[3743]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3732 pid=3743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:11.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361386136656230336532303332643362376237316362613833353133 Dec 16 12:16:11.928000 audit: BPF prog-id=161 op=LOAD Dec 16 12:16:11.928000 audit[3743]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3732 pid=3743 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:11.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361386136656230336532303332643362376237316362613833353133 Dec 16 12:16:12.046092 containerd[2086]: time="2025-12-16T12:16:12.046066841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-24p78,Uid:de88c53a-8ff1-44d5-a733-02cd5e521cdc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3a8a6eb03e2032d3b7b71cba83513bde276dc591ef2d0046cb601913dc01a1d2\"" Dec 16 12:16:12.047879 containerd[2086]: time="2025-12-16T12:16:12.047855763Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:16:12.064940 containerd[2086]: time="2025-12-16T12:16:12.064810229Z" level=info msg="connecting to shim a083e24a5e937deae73e71af2f1dffbe8271243063214c76a2e67690c19f05de" address="unix:///run/containerd/s/75fed8fb7556bd289a3e77d2098d51e69134927605e401806d5c28a52a15dafe" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:16:12.081591 systemd[1]: Started cri-containerd-a083e24a5e937deae73e71af2f1dffbe8271243063214c76a2e67690c19f05de.scope - libcontainer container a083e24a5e937deae73e71af2f1dffbe8271243063214c76a2e67690c19f05de. Dec 16 12:16:12.086000 audit: BPF prog-id=162 op=LOAD Dec 16 12:16:12.086000 audit: BPF prog-id=163 op=LOAD Dec 16 12:16:12.086000 audit[3788]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3777 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130383365323461356539333764656165373365373161663266316466 Dec 16 12:16:12.087000 audit: BPF prog-id=163 op=UNLOAD Dec 16 12:16:12.087000 audit[3788]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3777 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130383365323461356539333764656165373365373161663266316466 Dec 16 12:16:12.087000 audit: BPF prog-id=164 op=LOAD Dec 16 12:16:12.087000 audit[3788]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3777 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.087000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130383365323461356539333764656165373365373161663266316466 Dec 16 12:16:12.087000 audit: BPF prog-id=165 op=LOAD Dec 16 12:16:12.087000 audit[3788]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3777 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130383365323461356539333764656165373365373161663266316466 Dec 16 12:16:12.087000 audit: BPF prog-id=165 op=UNLOAD Dec 16 12:16:12.087000 audit[3788]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3777 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130383365323461356539333764656165373365373161663266316466 Dec 16 12:16:12.087000 audit: BPF prog-id=164 op=UNLOAD Dec 16 12:16:12.087000 audit[3788]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3777 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130383365323461356539333764656165373365373161663266316466 Dec 16 12:16:12.087000 audit: BPF prog-id=166 op=LOAD Dec 16 12:16:12.087000 audit[3788]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3777 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.087000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130383365323461356539333764656165373365373161663266316466 Dec 16 12:16:12.100241 containerd[2086]: time="2025-12-16T12:16:12.100179712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qpvds,Uid:b7e6b868-8fca-4abe-b28a-7976cac3fa82,Namespace:kube-system,Attempt:0,} returns sandbox id \"a083e24a5e937deae73e71af2f1dffbe8271243063214c76a2e67690c19f05de\"" Dec 16 12:16:12.102696 containerd[2086]: time="2025-12-16T12:16:12.102516631Z" level=info msg="CreateContainer within sandbox \"a083e24a5e937deae73e71af2f1dffbe8271243063214c76a2e67690c19f05de\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:16:12.120504 containerd[2086]: time="2025-12-16T12:16:12.120134066Z" level=info msg="Container 6ff82329704c7d422e2be9d0e553b1c162bce231d3f78c81497645e4226bc901: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:16:12.133469 containerd[2086]: time="2025-12-16T12:16:12.133417213Z" level=info msg="CreateContainer within sandbox \"a083e24a5e937deae73e71af2f1dffbe8271243063214c76a2e67690c19f05de\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6ff82329704c7d422e2be9d0e553b1c162bce231d3f78c81497645e4226bc901\"" Dec 16 12:16:12.133948 containerd[2086]: time="2025-12-16T12:16:12.133887558Z" level=info msg="StartContainer for \"6ff82329704c7d422e2be9d0e553b1c162bce231d3f78c81497645e4226bc901\"" Dec 16 12:16:12.135709 containerd[2086]: time="2025-12-16T12:16:12.135687465Z" level=info msg="connecting to shim 6ff82329704c7d422e2be9d0e553b1c162bce231d3f78c81497645e4226bc901" address="unix:///run/containerd/s/75fed8fb7556bd289a3e77d2098d51e69134927605e401806d5c28a52a15dafe" protocol=ttrpc version=3 Dec 16 12:16:12.149582 systemd[1]: Started cri-containerd-6ff82329704c7d422e2be9d0e553b1c162bce231d3f78c81497645e4226bc901.scope - libcontainer container 6ff82329704c7d422e2be9d0e553b1c162bce231d3f78c81497645e4226bc901. Dec 16 12:16:12.184000 audit: BPF prog-id=167 op=LOAD Dec 16 12:16:12.184000 audit[3815]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3777 pid=3815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666663832333239373034633764343232653262653964306535353362 Dec 16 12:16:12.184000 audit: BPF prog-id=168 op=LOAD Dec 16 12:16:12.184000 audit[3815]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3777 pid=3815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666663832333239373034633764343232653262653964306535353362 Dec 16 12:16:12.184000 audit: BPF prog-id=168 op=UNLOAD Dec 16 12:16:12.184000 audit[3815]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3777 pid=3815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666663832333239373034633764343232653262653964306535353362 Dec 16 12:16:12.184000 audit: BPF prog-id=167 op=UNLOAD Dec 16 12:16:12.184000 audit[3815]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3777 pid=3815 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666663832333239373034633764343232653262653964306535353362 Dec 16 12:16:12.184000 audit: BPF prog-id=169 op=LOAD Dec 16 12:16:12.184000 audit[3815]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3777 pid=3815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666663832333239373034633764343232653262653964306535353362 Dec 16 12:16:12.202614 containerd[2086]: time="2025-12-16T12:16:12.202586601Z" level=info msg="StartContainer for \"6ff82329704c7d422e2be9d0e553b1c162bce231d3f78c81497645e4226bc901\" returns successfully" Dec 16 12:16:12.271000 audit[3875]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3875 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.271000 audit[3875]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe04bbb20 a2=0 a3=1 items=0 ppid=3827 pid=3875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.271000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:16:12.272000 audit[3876]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3876 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.272000 audit[3876]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc6254170 a2=0 a3=1 items=0 ppid=3827 pid=3876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.272000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:16:12.273000 audit[3877]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3877 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.273000 audit[3877]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc3032060 a2=0 a3=1 items=0 ppid=3827 pid=3877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.273000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:16:12.274000 audit[3878]: NETFILTER_CFG table=mangle:60 family=2 entries=1 op=nft_register_chain pid=3878 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
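The proctitle= values in these records are hex-encoded because the kernel captures the process's argv with NUL separators: the runc records above decode to the "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/..." invocations issued by containerd (the field is truncated by auditd, so the container ID is cut short), and the iptables/ip6tables ones to kube-proxy's chain setup, e.g. "ip6tables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle". The SYSCALL records carry arch=c00000b7, the AArch64 audit arch; in the arm64 generic syscall table 280 is bpf, 57 is close, and 211 is sendmsg, the netlink call behind the NETFILTER_CFG entries. A minimal decoding sketch, with an illustrative function name:

    def decode_proctitle(hex_value: str) -> str:
        """Decode an audit PROCTITLE field: hex bytes, argv separated by NULs."""
        return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode("utf-8", "replace")

    # Shortened prefix of the ip6tables record above:
    assert decode_proctitle("6970367461626C6573002D770035") == "ip6tables -w 5"
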
Dec 16 12:16:12.274000 audit[3878]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe12632d0 a2=0 a3=1 items=0 ppid=3827 pid=3878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.274000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:16:12.275000 audit[3880]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3880 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:12.275000 audit[3880]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe02e06a0 a2=0 a3=1 items=0 ppid=3827 pid=3880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.275000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:16:12.276000 audit[3881]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_chain pid=3881 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:12.276000 audit[3881]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc3b16a80 a2=0 a3=1 items=0 ppid=3827 pid=3881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.276000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:16:12.376000 audit[3882]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3882 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:12.376000 audit[3882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff32bfab0 a2=0 a3=1 items=0 ppid=3827 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.376000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:16:12.379000 audit[3884]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3884 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:12.379000 audit[3884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffd4ed620 a2=0 a3=1 items=0 ppid=3827 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.379000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 12:16:12.382000 audit[3887]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=3887 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:12.382000 audit[3887]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=752 a0=3 a1=ffffd4b5fea0 a2=0 a3=1 items=0 ppid=3827 pid=3887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.382000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 12:16:12.383000 audit[3888]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=3888 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:12.383000 audit[3888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa98bc20 a2=0 a3=1 items=0 ppid=3827 pid=3888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.383000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:16:12.385000 audit[3890]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3890 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:12.385000 audit[3890]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc4908b40 a2=0 a3=1 items=0 ppid=3827 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.385000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:16:12.385000 audit[3891]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3891 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:12.385000 audit[3891]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc524c040 a2=0 a3=1 items=0 ppid=3827 pid=3891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.385000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:16:12.387000 audit[3893]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3893 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:12.387000 audit[3893]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdcdc8960 a2=0 a3=1 items=0 ppid=3827 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.387000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 12:16:12.390000 
audit[3896]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=3896 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:12.390000 audit[3896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff5dabff0 a2=0 a3=1 items=0 ppid=3827 pid=3896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.390000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 12:16:12.391000 audit[3897]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=3897 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:12.391000 audit[3897]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcdd59480 a2=0 a3=1 items=0 ppid=3827 pid=3897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.391000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:16:12.393000 audit[3899]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3899 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:12.393000 audit[3899]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff5a77c40 a2=0 a3=1 items=0 ppid=3827 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.393000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:16:12.394000 audit[3900]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=3900 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:12.394000 audit[3900]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff40b0cc0 a2=0 a3=1 items=0 ppid=3827 pid=3900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.394000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:16:12.396000 audit[3902]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=3902 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:12.396000 audit[3902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeeda1be0 a2=0 a3=1 items=0 ppid=3827 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.396000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:16:12.398000 audit[3905]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=3905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:12.398000 audit[3905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdf6be7e0 a2=0 a3=1 items=0 ppid=3827 pid=3905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.398000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:16:12.401000 audit[3908]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=3908 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:12.401000 audit[3908]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe02142c0 a2=0 a3=1 items=0 ppid=3827 pid=3908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.401000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 12:16:12.402000 audit[3909]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:12.402000 audit[3909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffe249550 a2=0 a3=1 items=0 ppid=3827 pid=3909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.402000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:16:12.404000 audit[3911]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:12.404000 audit[3911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff5628d70 a2=0 a3=1 items=0 ppid=3827 pid=3911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.404000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:16:12.407000 audit[3914]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=3914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:12.407000 audit[3914]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=528 a0=3 a1=ffffd86b3410 a2=0 a3=1 items=0 ppid=3827 pid=3914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.407000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:16:12.407000 audit[3915]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=3915 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:12.407000 audit[3915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc3fb6680 a2=0 a3=1 items=0 ppid=3827 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.407000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:16:12.409000 audit[3917]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=3917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:16:12.409000 audit[3917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffff4ace840 a2=0 a3=1 items=0 ppid=3827 pid=3917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.409000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:16:12.477000 audit[3923]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=3923 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:12.477000 audit[3923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc56e0270 a2=0 a3=1 items=0 ppid=3827 pid=3923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.477000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:12.508000 audit[3923]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=3923 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:12.508000 audit[3923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffc56e0270 a2=0 a3=1 items=0 ppid=3827 pid=3923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.508000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:12.510000 audit[3928]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3928 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.510000 audit[3928]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff93599f0 a2=0 a3=1 items=0 ppid=3827 pid=3928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.510000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:16:12.512000 audit[3930]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=3930 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.512000 audit[3930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffda2c3450 a2=0 a3=1 items=0 ppid=3827 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.512000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 12:16:12.514000 audit[3933]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3933 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.514000 audit[3933]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffeb4f1a60 a2=0 a3=1 items=0 ppid=3827 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.514000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 12:16:12.516000 audit[3935]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3935 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.516000 audit[3935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe201d630 a2=0 a3=1 items=0 ppid=3827 pid=3935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.516000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:16:12.519000 audit[3937]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3937 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.519000 audit[3937]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd838add0 a2=0 a3=1 items=0 ppid=3827 pid=3937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.519000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:16:12.520000 audit[3938]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3938 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.520000 audit[3938]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffff074690 a2=0 a3=1 items=0 ppid=3827 pid=3938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.520000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:16:12.521000 audit[3940]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3940 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.521000 audit[3940]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd70031d0 a2=0 a3=1 items=0 ppid=3827 pid=3940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.521000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 12:16:12.524000 audit[3943]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=3943 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.524000 audit[3943]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffff4b5ee10 a2=0 a3=1 items=0 ppid=3827 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.524000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 12:16:12.525000 audit[3944]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=3944 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.525000 audit[3944]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd558b630 a2=0 a3=1 items=0 ppid=3827 pid=3944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.525000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:16:12.527000 audit[3946]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3946 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.527000 audit[3946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd6b48370 a2=0 a3=1 items=0 ppid=3827 pid=3946 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.527000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:16:12.528000 audit[3947]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=3947 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.528000 audit[3947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff574cfe0 a2=0 a3=1 items=0 ppid=3827 pid=3947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.528000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:16:12.529000 audit[3949]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=3949 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.529000 audit[3949]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcfa8b9c0 a2=0 a3=1 items=0 ppid=3827 pid=3949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.529000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:16:12.532000 audit[3952]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=3952 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.532000 audit[3952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd0289310 a2=0 a3=1 items=0 ppid=3827 pid=3952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.532000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 12:16:12.534000 audit[3955]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=3955 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.534000 audit[3955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff9caca60 a2=0 a3=1 items=0 ppid=3827 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.534000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 12:16:12.535000 audit[3956]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.535000 audit[3956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffbba8d40 a2=0 a3=1 items=0 ppid=3827 pid=3956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.535000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:16:12.537000 audit[3958]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=3958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.537000 audit[3958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffd87ee080 a2=0 a3=1 items=0 ppid=3827 pid=3958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.537000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:16:12.540000 audit[3961]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=3961 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.540000 audit[3961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff6a0cec0 a2=0 a3=1 items=0 ppid=3827 pid=3961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.540000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:16:12.541000 audit[3962]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=3962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.541000 audit[3962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2da44b0 a2=0 a3=1 items=0 ppid=3827 pid=3962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.541000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:16:12.542000 audit[3964]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=3964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.542000 audit[3964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffff8bcffc0 a2=0 a3=1 items=0 ppid=3827 pid=3964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.542000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:16:12.543000 audit[3965]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=3965 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.543000 audit[3965]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd16521a0 a2=0 a3=1 items=0 ppid=3827 pid=3965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.543000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:16:12.545000 audit[3967]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=3967 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.545000 audit[3967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd2a25850 a2=0 a3=1 items=0 ppid=3827 pid=3967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.545000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:16:12.548000 audit[3970]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=3970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:16:12.548000 audit[3970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc87b48b0 a2=0 a3=1 items=0 ppid=3827 pid=3970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.548000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:16:12.551000 audit[3972]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=3972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:16:12.551000 audit[3972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffc9f9fc40 a2=0 a3=1 items=0 ppid=3827 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.551000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:12.551000 audit[3972]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=3972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:16:12.551000 audit[3972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffc9f9fc40 a2=0 a3=1 items=0 ppid=3827 pid=3972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:12.551000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:13.414588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3011658609.mount: Deactivated successfully. Dec 16 12:16:13.807099 containerd[2086]: time="2025-12-16T12:16:13.806640417Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:16:13.812885 containerd[2086]: time="2025-12-16T12:16:13.812836044Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 16 12:16:13.817808 containerd[2086]: time="2025-12-16T12:16:13.817767308Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:16:13.829955 containerd[2086]: time="2025-12-16T12:16:13.829915993Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:16:13.830462 containerd[2086]: time="2025-12-16T12:16:13.830294230Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.782413931s" Dec 16 12:16:13.830462 containerd[2086]: time="2025-12-16T12:16:13.830319199Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 12:16:13.832789 containerd[2086]: time="2025-12-16T12:16:13.832766250Z" level=info msg="CreateContainer within sandbox \"3a8a6eb03e2032d3b7b71cba83513bde276dc591ef2d0046cb601913dc01a1d2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:16:13.870213 containerd[2086]: time="2025-12-16T12:16:13.869866105Z" level=info msg="Container 9a4d94efeda73bd7c026e09b1fd68c378c02ba088eceba7c70cc64018e97f3a5: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:16:13.871316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3330407296.mount: Deactivated successfully. 
Dec 16 12:16:13.881267 containerd[2086]: time="2025-12-16T12:16:13.881242572Z" level=info msg="CreateContainer within sandbox \"3a8a6eb03e2032d3b7b71cba83513bde276dc591ef2d0046cb601913dc01a1d2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9a4d94efeda73bd7c026e09b1fd68c378c02ba088eceba7c70cc64018e97f3a5\"" Dec 16 12:16:13.882828 containerd[2086]: time="2025-12-16T12:16:13.881678434Z" level=info msg="StartContainer for \"9a4d94efeda73bd7c026e09b1fd68c378c02ba088eceba7c70cc64018e97f3a5\"" Dec 16 12:16:13.883256 containerd[2086]: time="2025-12-16T12:16:13.883225863Z" level=info msg="connecting to shim 9a4d94efeda73bd7c026e09b1fd68c378c02ba088eceba7c70cc64018e97f3a5" address="unix:///run/containerd/s/954a22108f42fe448f6335156d8cbf0b108263d2367d1f59c8051c5cd9652435" protocol=ttrpc version=3 Dec 16 12:16:13.901572 systemd[1]: Started cri-containerd-9a4d94efeda73bd7c026e09b1fd68c378c02ba088eceba7c70cc64018e97f3a5.scope - libcontainer container 9a4d94efeda73bd7c026e09b1fd68c378c02ba088eceba7c70cc64018e97f3a5. Dec 16 12:16:13.907000 audit: BPF prog-id=170 op=LOAD Dec 16 12:16:13.908000 audit: BPF prog-id=171 op=LOAD Dec 16 12:16:13.908000 audit[3981]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3732 pid=3981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:13.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346439346566656461373362643763303236653039623166643638 Dec 16 12:16:13.908000 audit: BPF prog-id=171 op=UNLOAD Dec 16 12:16:13.908000 audit[3981]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3732 pid=3981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:13.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346439346566656461373362643763303236653039623166643638 Dec 16 12:16:13.908000 audit: BPF prog-id=172 op=LOAD Dec 16 12:16:13.908000 audit[3981]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3732 pid=3981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:13.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346439346566656461373362643763303236653039623166643638 Dec 16 12:16:13.908000 audit: BPF prog-id=173 op=LOAD Dec 16 12:16:13.908000 audit[3981]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3732 pid=3981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:13.908000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346439346566656461373362643763303236653039623166643638 Dec 16 12:16:13.908000 audit: BPF prog-id=173 op=UNLOAD Dec 16 12:16:13.908000 audit[3981]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3732 pid=3981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:13.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346439346566656461373362643763303236653039623166643638 Dec 16 12:16:13.908000 audit: BPF prog-id=172 op=UNLOAD Dec 16 12:16:13.908000 audit[3981]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3732 pid=3981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:13.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346439346566656461373362643763303236653039623166643638 Dec 16 12:16:13.908000 audit: BPF prog-id=174 op=LOAD Dec 16 12:16:13.908000 audit[3981]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3732 pid=3981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:13.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346439346566656461373362643763303236653039623166643638 Dec 16 12:16:13.928731 containerd[2086]: time="2025-12-16T12:16:13.928701794Z" level=info msg="StartContainer for \"9a4d94efeda73bd7c026e09b1fd68c378c02ba088eceba7c70cc64018e97f3a5\" returns successfully" Dec 16 12:16:14.052575 kubelet[3679]: I1216 12:16:14.052528 3679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qpvds" podStartSLOduration=3.052512375 podStartE2EDuration="3.052512375s" podCreationTimestamp="2025-12-16 12:16:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:16:13.047720484 +0000 UTC m=+6.140646041" watchObservedRunningTime="2025-12-16 12:16:14.052512375 +0000 UTC m=+7.145437924" Dec 16 12:16:14.053075 kubelet[3679]: I1216 12:16:14.052603 3679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-24p78" podStartSLOduration=1.269084344 podStartE2EDuration="3.052597762s" podCreationTimestamp="2025-12-16 12:16:11 +0000 UTC" firstStartedPulling="2025-12-16 12:16:12.047437451 +0000 UTC m=+5.140363000" lastFinishedPulling="2025-12-16 12:16:13.830950869 +0000 UTC m=+6.923876418" 
observedRunningTime="2025-12-16 12:16:14.048577161 +0000 UTC m=+7.141502710" watchObservedRunningTime="2025-12-16 12:16:14.052597762 +0000 UTC m=+7.145523311" Dec 16 12:16:19.011633 sudo[2607]: pam_unix(sudo:session): session closed for user root Dec 16 12:16:19.028006 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 12:16:19.028086 kernel: audit: type=1106 audit(1765887379.011:546): pid=2607 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:16:19.011000 audit[2607]: USER_END pid=2607 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:16:19.011000 audit[2607]: CRED_DISP pid=2607 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:16:19.044442 kernel: audit: type=1104 audit(1765887379.011:547): pid=2607 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:16:19.099364 sshd[2606]: Connection closed by 10.200.16.10 port 40826 Dec 16 12:16:19.098159 sshd-session[2602]: pam_unix(sshd:session): session closed for user core Dec 16 12:16:19.099000 audit[2602]: USER_END pid=2602 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:19.102875 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:16:19.103321 systemd[1]: session-10.scope: Consumed 2.742s CPU time, 216.9M memory peak. Dec 16 12:16:19.104320 systemd[1]: sshd@6-10.200.20.11:22-10.200.16.10:40826.service: Deactivated successfully. Dec 16 12:16:19.099000 audit[2602]: CRED_DISP pid=2602 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:19.121131 systemd-logind[2058]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:16:19.121670 kernel: audit: type=1106 audit(1765887379.099:548): pid=2602 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:19.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.11:22-10.200.16.10:40826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:16:19.152726 kernel: audit: type=1104 audit(1765887379.099:549): pid=2602 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:16:19.152793 kernel: audit: type=1131 audit(1765887379.104:550): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.11:22-10.200.16.10:40826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:16:19.152861 systemd-logind[2058]: Removed session 10. Dec 16 12:16:20.802000 audit[4062]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:20.818475 kernel: audit: type=1325 audit(1765887380.802:551): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:20.802000 audit[4062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffeaa4eec0 a2=0 a3=1 items=0 ppid=3827 pid=4062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:20.802000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:20.851090 kernel: audit: type=1300 audit(1765887380.802:551): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffeaa4eec0 a2=0 a3=1 items=0 ppid=3827 pid=4062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:20.851147 kernel: audit: type=1327 audit(1765887380.802:551): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:20.819000 audit[4062]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:20.819000 audit[4062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffeaa4eec0 a2=0 a3=1 items=0 ppid=3827 pid=4062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:20.880255 kernel: audit: type=1325 audit(1765887380.819:552): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:20.880309 kernel: audit: type=1300 audit(1765887380.819:552): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffeaa4eec0 a2=0 a3=1 items=0 ppid=3827 pid=4062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:20.819000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:20.901000 audit[4064]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4064 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 16 12:16:20.901000 audit[4064]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffa360e50 a2=0 a3=1 items=0 ppid=3827 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:20.901000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:20.905000 audit[4064]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4064 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:20.905000 audit[4064]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffa360e50 a2=0 a3=1 items=0 ppid=3827 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:20.905000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:23.476000 audit[4066]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4066 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:23.476000 audit[4066]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe4ef79d0 a2=0 a3=1 items=0 ppid=3827 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:23.476000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:23.482000 audit[4066]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4066 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:23.482000 audit[4066]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe4ef79d0 a2=0 a3=1 items=0 ppid=3827 pid=4066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:23.482000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:23.519000 audit[4068]: NETFILTER_CFG table=filter:114 family=2 entries=18 op=nft_register_rule pid=4068 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:23.519000 audit[4068]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffda8863a0 a2=0 a3=1 items=0 ppid=3827 pid=4068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:23.519000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:23.525000 audit[4068]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4068 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:23.525000 audit[4068]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffda8863a0 a2=0 a3=1 items=0 
ppid=3827 pid=4068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:23.525000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:24.537000 audit[4071]: NETFILTER_CFG table=filter:116 family=2 entries=19 op=nft_register_rule pid=4071 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:24.540646 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 16 12:16:24.540694 kernel: audit: type=1325 audit(1765887384.537:559): table=filter:116 family=2 entries=19 op=nft_register_rule pid=4071 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:24.537000 audit[4071]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc314c9c0 a2=0 a3=1 items=0 ppid=3827 pid=4071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:24.566449 kernel: audit: type=1300 audit(1765887384.537:559): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc314c9c0 a2=0 a3=1 items=0 ppid=3827 pid=4071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:24.537000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:24.575513 kernel: audit: type=1327 audit(1765887384.537:559): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:24.577000 audit[4071]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4071 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:24.577000 audit[4071]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc314c9c0 a2=0 a3=1 items=0 ppid=3827 pid=4071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:24.604290 kernel: audit: type=1325 audit(1765887384.577:560): table=nat:117 family=2 entries=12 op=nft_register_rule pid=4071 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:24.604362 kernel: audit: type=1300 audit(1765887384.577:560): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc314c9c0 a2=0 a3=1 items=0 ppid=3827 pid=4071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:24.577000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:24.613722 kernel: audit: type=1327 audit(1765887384.577:560): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:25.092003 systemd[1]: Created slice kubepods-besteffort-podf2bb9774_6c05_4991_a9dc_b0ce0dcc3ff2.slice - libcontainer container 
kubepods-besteffort-podf2bb9774_6c05_4991_a9dc_b0ce0dcc3ff2.slice. Dec 16 12:16:25.093095 kubelet[3679]: I1216 12:16:25.093020 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2bb9774-6c05-4991-a9dc-b0ce0dcc3ff2-tigera-ca-bundle\") pod \"calico-typha-656ccd948b-4v8zh\" (UID: \"f2bb9774-6c05-4991-a9dc-b0ce0dcc3ff2\") " pod="calico-system/calico-typha-656ccd948b-4v8zh" Dec 16 12:16:25.093095 kubelet[3679]: I1216 12:16:25.093056 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k72r\" (UniqueName: \"kubernetes.io/projected/f2bb9774-6c05-4991-a9dc-b0ce0dcc3ff2-kube-api-access-5k72r\") pod \"calico-typha-656ccd948b-4v8zh\" (UID: \"f2bb9774-6c05-4991-a9dc-b0ce0dcc3ff2\") " pod="calico-system/calico-typha-656ccd948b-4v8zh" Dec 16 12:16:25.093095 kubelet[3679]: I1216 12:16:25.093070 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f2bb9774-6c05-4991-a9dc-b0ce0dcc3ff2-typha-certs\") pod \"calico-typha-656ccd948b-4v8zh\" (UID: \"f2bb9774-6c05-4991-a9dc-b0ce0dcc3ff2\") " pod="calico-system/calico-typha-656ccd948b-4v8zh" Dec 16 12:16:25.273946 systemd[1]: Created slice kubepods-besteffort-pod4bcdf381_31cc_449b_b406_e84051860259.slice - libcontainer container kubepods-besteffort-pod4bcdf381_31cc_449b_b406_e84051860259.slice. Dec 16 12:16:25.394040 kubelet[3679]: I1216 12:16:25.393901 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4bcdf381-31cc-449b-b406-e84051860259-node-certs\") pod \"calico-node-7b48v\" (UID: \"4bcdf381-31cc-449b-b406-e84051860259\") " pod="calico-system/calico-node-7b48v" Dec 16 12:16:25.394040 kubelet[3679]: I1216 12:16:25.393984 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4bcdf381-31cc-449b-b406-e84051860259-policysync\") pod \"calico-node-7b48v\" (UID: \"4bcdf381-31cc-449b-b406-e84051860259\") " pod="calico-system/calico-node-7b48v" Dec 16 12:16:25.394040 kubelet[3679]: I1216 12:16:25.394000 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4bcdf381-31cc-449b-b406-e84051860259-var-lib-calico\") pod \"calico-node-7b48v\" (UID: \"4bcdf381-31cc-449b-b406-e84051860259\") " pod="calico-system/calico-node-7b48v" Dec 16 12:16:25.394040 kubelet[3679]: I1216 12:16:25.394027 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4bcdf381-31cc-449b-b406-e84051860259-xtables-lock\") pod \"calico-node-7b48v\" (UID: \"4bcdf381-31cc-449b-b406-e84051860259\") " pod="calico-system/calico-node-7b48v" Dec 16 12:16:25.394203 kubelet[3679]: I1216 12:16:25.394045 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl8bf\" (UniqueName: \"kubernetes.io/projected/4bcdf381-31cc-449b-b406-e84051860259-kube-api-access-gl8bf\") pod \"calico-node-7b48v\" (UID: \"4bcdf381-31cc-449b-b406-e84051860259\") " pod="calico-system/calico-node-7b48v" Dec 16 12:16:25.394203 kubelet[3679]: I1216 12:16:25.394066 3679 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4bcdf381-31cc-449b-b406-e84051860259-cni-bin-dir\") pod \"calico-node-7b48v\" (UID: \"4bcdf381-31cc-449b-b406-e84051860259\") " pod="calico-system/calico-node-7b48v" Dec 16 12:16:25.394203 kubelet[3679]: I1216 12:16:25.394077 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4bcdf381-31cc-449b-b406-e84051860259-cni-log-dir\") pod \"calico-node-7b48v\" (UID: \"4bcdf381-31cc-449b-b406-e84051860259\") " pod="calico-system/calico-node-7b48v" Dec 16 12:16:25.394203 kubelet[3679]: I1216 12:16:25.394088 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4bcdf381-31cc-449b-b406-e84051860259-cni-net-dir\") pod \"calico-node-7b48v\" (UID: \"4bcdf381-31cc-449b-b406-e84051860259\") " pod="calico-system/calico-node-7b48v" Dec 16 12:16:25.394203 kubelet[3679]: I1216 12:16:25.394097 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bcdf381-31cc-449b-b406-e84051860259-tigera-ca-bundle\") pod \"calico-node-7b48v\" (UID: \"4bcdf381-31cc-449b-b406-e84051860259\") " pod="calico-system/calico-node-7b48v" Dec 16 12:16:25.394322 kubelet[3679]: I1216 12:16:25.394107 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4bcdf381-31cc-449b-b406-e84051860259-flexvol-driver-host\") pod \"calico-node-7b48v\" (UID: \"4bcdf381-31cc-449b-b406-e84051860259\") " pod="calico-system/calico-node-7b48v" Dec 16 12:16:25.394322 kubelet[3679]: I1216 12:16:25.394120 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4bcdf381-31cc-449b-b406-e84051860259-var-run-calico\") pod \"calico-node-7b48v\" (UID: \"4bcdf381-31cc-449b-b406-e84051860259\") " pod="calico-system/calico-node-7b48v" Dec 16 12:16:25.394322 kubelet[3679]: I1216 12:16:25.394130 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bcdf381-31cc-449b-b406-e84051860259-lib-modules\") pod \"calico-node-7b48v\" (UID: \"4bcdf381-31cc-449b-b406-e84051860259\") " pod="calico-system/calico-node-7b48v" Dec 16 12:16:25.396636 containerd[2086]: time="2025-12-16T12:16:25.396475468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-656ccd948b-4v8zh,Uid:f2bb9774-6c05-4991-a9dc-b0ce0dcc3ff2,Namespace:calico-system,Attempt:0,}" Dec 16 12:16:25.433832 containerd[2086]: time="2025-12-16T12:16:25.433795922Z" level=info msg="connecting to shim 21f944863473d723bb57f6723ae25734ec9f796ed201f43267acf03f7f82e67e" address="unix:///run/containerd/s/20464f538a9ed4664866bfe2245378221144df8af127052ef70c7b80a14a81cb" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:16:25.458229 systemd[1]: Started cri-containerd-21f944863473d723bb57f6723ae25734ec9f796ed201f43267acf03f7f82e67e.scope - libcontainer container 21f944863473d723bb57f6723ae25734ec9f796ed201f43267acf03f7f82e67e. 
Dec 16 12:16:25.467000 audit: BPF prog-id=175 op=LOAD Dec 16 12:16:25.472000 audit: BPF prog-id=176 op=LOAD Dec 16 12:16:25.476742 kernel: audit: type=1334 audit(1765887385.467:561): prog-id=175 op=LOAD Dec 16 12:16:25.476775 kernel: audit: type=1334 audit(1765887385.472:562): prog-id=176 op=LOAD Dec 16 12:16:25.479462 kubelet[3679]: E1216 12:16:25.479308 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1" Dec 16 12:16:25.472000 audit[4093]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=4081 pid=4093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:25.495539 kernel: audit: type=1300 audit(1765887385.472:562): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=4081 pid=4093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:25.497632 kubelet[3679]: E1216 12:16:25.497563 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.497632 kubelet[3679]: W1216 12:16:25.497581 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.497632 kubelet[3679]: E1216 12:16:25.497596 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.497773 kubelet[3679]: E1216 12:16:25.497727 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.497773 kubelet[3679]: W1216 12:16:25.497740 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.497773 kubelet[3679]: E1216 12:16:25.497747 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.498010 kubelet[3679]: E1216 12:16:25.497834 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.498010 kubelet[3679]: W1216 12:16:25.497844 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.498010 kubelet[3679]: E1216 12:16:25.497850 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:16:25.498010 kubelet[3679]: E1216 12:16:25.497982 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.498010 kubelet[3679]: W1216 12:16:25.497988 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.498769 kubelet[3679]: E1216 12:16:25.498753 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.499272 kubelet[3679]: W1216 12:16:25.499005 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.499272 kubelet[3679]: E1216 12:16:25.499021 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.501547 kubelet[3679]: E1216 12:16:25.499437 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.501738 kubelet[3679]: E1216 12:16:25.501725 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.501909 kubelet[3679]: W1216 12:16:25.501809 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.501909 kubelet[3679]: E1216 12:16:25.501826 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231663934343836333437336437323362623537663637323361653235 Dec 16 12:16:25.505622 kubelet[3679]: E1216 12:16:25.504105 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.505622 kubelet[3679]: W1216 12:16:25.505482 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.505622 kubelet[3679]: E1216 12:16:25.505506 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:16:25.506743 kubelet[3679]: E1216 12:16:25.506705 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.506847 kubelet[3679]: W1216 12:16:25.506835 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.506999 kubelet[3679]: E1216 12:16:25.506973 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.507254 kubelet[3679]: E1216 12:16:25.507244 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.507341 kubelet[3679]: W1216 12:16:25.507298 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.507508 kubelet[3679]: E1216 12:16:25.507493 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.507786 kubelet[3679]: E1216 12:16:25.507764 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.507786 kubelet[3679]: W1216 12:16:25.507775 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.507976 kubelet[3679]: E1216 12:16:25.507963 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.508159 kubelet[3679]: E1216 12:16:25.508139 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.508159 kubelet[3679]: W1216 12:16:25.508148 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.508366 kubelet[3679]: E1216 12:16:25.508204 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.508546 kubelet[3679]: E1216 12:16:25.508534 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.508657 kubelet[3679]: W1216 12:16:25.508599 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.508657 kubelet[3679]: E1216 12:16:25.508623 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:16:25.508892 kubelet[3679]: E1216 12:16:25.508872 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.508955 kubelet[3679]: W1216 12:16:25.508939 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.509023 kubelet[3679]: E1216 12:16:25.509012 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.509307 kubelet[3679]: E1216 12:16:25.509298 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.509394 kubelet[3679]: W1216 12:16:25.509384 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.509589 kubelet[3679]: E1216 12:16:25.509479 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.510843 kubelet[3679]: E1216 12:16:25.510823 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.510983 kubelet[3679]: W1216 12:16:25.510912 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.510983 kubelet[3679]: E1216 12:16:25.510936 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.511351 kubelet[3679]: E1216 12:16:25.511340 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.511562 kubelet[3679]: W1216 12:16:25.511410 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.511716 kubelet[3679]: E1216 12:16:25.511637 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.512742 kubelet[3679]: E1216 12:16:25.512725 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.512883 kubelet[3679]: W1216 12:16:25.512785 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.512883 kubelet[3679]: E1216 12:16:25.512814 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:16:25.516767 kernel: audit: type=1327 audit(1765887385.472:562): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231663934343836333437336437323362623537663637323361653235 Dec 16 12:16:25.472000 audit: BPF prog-id=176 op=UNLOAD Dec 16 12:16:25.472000 audit[4093]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4081 pid=4093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:25.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231663934343836333437336437323362623537663637323361653235 Dec 16 12:16:25.472000 audit: BPF prog-id=177 op=LOAD Dec 16 12:16:25.472000 audit[4093]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=4081 pid=4093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:25.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231663934343836333437336437323362623537663637323361653235 Dec 16 12:16:25.476000 audit: BPF prog-id=178 op=LOAD Dec 16 12:16:25.476000 audit[4093]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=4081 pid=4093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:25.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231663934343836333437336437323362623537663637323361653235 Dec 16 12:16:25.476000 audit: BPF prog-id=178 op=UNLOAD Dec 16 12:16:25.476000 audit[4093]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4081 pid=4093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:25.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231663934343836333437336437323362623537663637323361653235 Dec 16 12:16:25.476000 audit: BPF prog-id=177 op=UNLOAD Dec 16 12:16:25.476000 audit[4093]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4081 pid=4093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:25.476000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231663934343836333437336437323362623537663637323361653235 Dec 16 12:16:25.476000 audit: BPF prog-id=179 op=LOAD Dec 16 12:16:25.476000 audit[4093]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=4081 pid=4093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:25.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231663934343836333437336437323362623537663637323361653235 Dec 16 12:16:25.528930 kubelet[3679]: E1216 12:16:25.528917 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.529079 kubelet[3679]: W1216 12:16:25.528976 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.529079 kubelet[3679]: E1216 12:16:25.528990 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.548223 containerd[2086]: time="2025-12-16T12:16:25.548173206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-656ccd948b-4v8zh,Uid:f2bb9774-6c05-4991-a9dc-b0ce0dcc3ff2,Namespace:calico-system,Attempt:0,} returns sandbox id \"21f944863473d723bb57f6723ae25734ec9f796ed201f43267acf03f7f82e67e\"" Dec 16 12:16:25.551220 containerd[2086]: time="2025-12-16T12:16:25.551189861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 12:16:25.578956 containerd[2086]: time="2025-12-16T12:16:25.578680420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7b48v,Uid:4bcdf381-31cc-449b-b406-e84051860259,Namespace:calico-system,Attempt:0,}" Dec 16 12:16:25.596986 kubelet[3679]: E1216 12:16:25.596963 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.596986 kubelet[3679]: W1216 12:16:25.596979 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.596986 kubelet[3679]: E1216 12:16:25.596991 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:16:25.597276 kubelet[3679]: I1216 12:16:25.597010 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/03762c75-9df3-49a2-9166-fb2b4578d7a1-kubelet-dir\") pod \"csi-node-driver-mk7sc\" (UID: \"03762c75-9df3-49a2-9166-fb2b4578d7a1\") " pod="calico-system/csi-node-driver-mk7sc" Dec 16 12:16:25.597276 kubelet[3679]: E1216 12:16:25.597128 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.597276 kubelet[3679]: W1216 12:16:25.597135 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.597276 kubelet[3679]: E1216 12:16:25.597145 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.597276 kubelet[3679]: I1216 12:16:25.597156 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/03762c75-9df3-49a2-9166-fb2b4578d7a1-registration-dir\") pod \"csi-node-driver-mk7sc\" (UID: \"03762c75-9df3-49a2-9166-fb2b4578d7a1\") " pod="calico-system/csi-node-driver-mk7sc" Dec 16 12:16:25.597470 kubelet[3679]: E1216 12:16:25.597423 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.597470 kubelet[3679]: W1216 12:16:25.597435 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.597560 kubelet[3679]: E1216 12:16:25.597548 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.597739 kubelet[3679]: E1216 12:16:25.597724 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.597739 kubelet[3679]: W1216 12:16:25.597735 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.597739 kubelet[3679]: E1216 12:16:25.597746 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:16:25.597975 kubelet[3679]: I1216 12:16:25.597759 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/03762c75-9df3-49a2-9166-fb2b4578d7a1-varrun\") pod \"csi-node-driver-mk7sc\" (UID: \"03762c75-9df3-49a2-9166-fb2b4578d7a1\") " pod="calico-system/csi-node-driver-mk7sc" Dec 16 12:16:25.597975 kubelet[3679]: E1216 12:16:25.597865 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.597975 kubelet[3679]: W1216 12:16:25.597871 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.597975 kubelet[3679]: E1216 12:16:25.597880 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.598365 kubelet[3679]: E1216 12:16:25.598262 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.598365 kubelet[3679]: W1216 12:16:25.598273 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.598365 kubelet[3679]: E1216 12:16:25.598283 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.598408 kubelet[3679]: E1216 12:16:25.598400 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.598408 kubelet[3679]: W1216 12:16:25.598406 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.598433 kubelet[3679]: E1216 12:16:25.598412 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.598607 kubelet[3679]: E1216 12:16:25.598594 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.598607 kubelet[3679]: W1216 12:16:25.598603 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.598607 kubelet[3679]: E1216 12:16:25.598609 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:16:25.599367 kubelet[3679]: E1216 12:16:25.598707 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.599367 kubelet[3679]: W1216 12:16:25.598712 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.599367 kubelet[3679]: E1216 12:16:25.598718 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.599367 kubelet[3679]: E1216 12:16:25.598854 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.599367 kubelet[3679]: W1216 12:16:25.598860 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.599367 kubelet[3679]: E1216 12:16:25.598866 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.599367 kubelet[3679]: I1216 12:16:25.598879 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/03762c75-9df3-49a2-9166-fb2b4578d7a1-socket-dir\") pod \"csi-node-driver-mk7sc\" (UID: \"03762c75-9df3-49a2-9166-fb2b4578d7a1\") " pod="calico-system/csi-node-driver-mk7sc" Dec 16 12:16:25.599367 kubelet[3679]: E1216 12:16:25.599004 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.599367 kubelet[3679]: W1216 12:16:25.599011 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.599904 kubelet[3679]: E1216 12:16:25.599020 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.599904 kubelet[3679]: I1216 12:16:25.599029 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd6kb\" (UniqueName: \"kubernetes.io/projected/03762c75-9df3-49a2-9166-fb2b4578d7a1-kube-api-access-jd6kb\") pod \"csi-node-driver-mk7sc\" (UID: \"03762c75-9df3-49a2-9166-fb2b4578d7a1\") " pod="calico-system/csi-node-driver-mk7sc" Dec 16 12:16:25.599904 kubelet[3679]: E1216 12:16:25.599542 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.599904 kubelet[3679]: W1216 12:16:25.599554 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.599904 kubelet[3679]: E1216 12:16:25.599568 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:16:25.600438 kubelet[3679]: E1216 12:16:25.600129 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.600438 kubelet[3679]: W1216 12:16:25.600142 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.600438 kubelet[3679]: E1216 12:16:25.600173 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.600756 kubelet[3679]: E1216 12:16:25.600739 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.600756 kubelet[3679]: W1216 12:16:25.600752 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.600979 kubelet[3679]: E1216 12:16:25.600763 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.601081 kubelet[3679]: E1216 12:16:25.601065 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.601139 kubelet[3679]: W1216 12:16:25.601081 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.601139 kubelet[3679]: E1216 12:16:25.601091 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:16:25.618724 containerd[2086]: time="2025-12-16T12:16:25.618497111Z" level=info msg="connecting to shim fe05887e9414a3e8b5d91adfbf51f84a964e1d7346fde376444f1c0a20c73eab" address="unix:///run/containerd/s/dff5e552e2feba439f260a7f6233af7362952d1f48670c7f1fb6db3ce54c0170" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:16:25.626000 audit[4182]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:25.626000 audit[4182]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff468b6b0 a2=0 a3=1 items=0 ppid=3827 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:25.626000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:25.630000 audit[4182]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:25.630000 audit[4182]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff468b6b0 a2=0 a3=1 items=0 ppid=3827 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:25.630000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:25.641608 systemd[1]: Started cri-containerd-fe05887e9414a3e8b5d91adfbf51f84a964e1d7346fde376444f1c0a20c73eab.scope - libcontainer container fe05887e9414a3e8b5d91adfbf51f84a964e1d7346fde376444f1c0a20c73eab. 
Dec 16 12:16:25.649000 audit: BPF prog-id=180 op=LOAD Dec 16 12:16:25.650000 audit: BPF prog-id=181 op=LOAD Dec 16 12:16:25.650000 audit[4181]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4169 pid=4181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:25.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303538383765393431346133653862356439316164666266353166 Dec 16 12:16:25.650000 audit: BPF prog-id=181 op=UNLOAD Dec 16 12:16:25.650000 audit[4181]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:25.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303538383765393431346133653862356439316164666266353166 Dec 16 12:16:25.650000 audit: BPF prog-id=182 op=LOAD Dec 16 12:16:25.650000 audit[4181]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4169 pid=4181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:25.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303538383765393431346133653862356439316164666266353166 Dec 16 12:16:25.650000 audit: BPF prog-id=183 op=LOAD Dec 16 12:16:25.650000 audit[4181]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4169 pid=4181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:25.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303538383765393431346133653862356439316164666266353166 Dec 16 12:16:25.650000 audit: BPF prog-id=183 op=UNLOAD Dec 16 12:16:25.650000 audit[4181]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:25.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303538383765393431346133653862356439316164666266353166 Dec 16 12:16:25.650000 audit: BPF prog-id=182 op=UNLOAD Dec 16 12:16:25.650000 audit[4181]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:25.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303538383765393431346133653862356439316164666266353166 Dec 16 12:16:25.650000 audit: BPF prog-id=184 op=LOAD Dec 16 12:16:25.650000 audit[4181]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4169 pid=4181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:25.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665303538383765393431346133653862356439316164666266353166 Dec 16 12:16:25.665385 containerd[2086]: time="2025-12-16T12:16:25.665349217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7b48v,Uid:4bcdf381-31cc-449b-b406-e84051860259,Namespace:calico-system,Attempt:0,} returns sandbox id \"fe05887e9414a3e8b5d91adfbf51f84a964e1d7346fde376444f1c0a20c73eab\"" Dec 16 12:16:25.700245 kubelet[3679]: E1216 12:16:25.700213 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.700245 kubelet[3679]: W1216 12:16:25.700232 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.700406 kubelet[3679]: E1216 12:16:25.700342 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.700600 kubelet[3679]: E1216 12:16:25.700586 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.700676 kubelet[3679]: W1216 12:16:25.700666 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.700740 kubelet[3679]: E1216 12:16:25.700726 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:16:25.700958 kubelet[3679]: E1216 12:16:25.700940 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.700958 kubelet[3679]: W1216 12:16:25.700955 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.701036 kubelet[3679]: E1216 12:16:25.700968 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.701099 kubelet[3679]: E1216 12:16:25.701079 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.701099 kubelet[3679]: W1216 12:16:25.701086 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.701099 kubelet[3679]: E1216 12:16:25.701095 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.701226 kubelet[3679]: E1216 12:16:25.701214 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.701226 kubelet[3679]: W1216 12:16:25.701223 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.701270 kubelet[3679]: E1216 12:16:25.701236 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.701373 kubelet[3679]: E1216 12:16:25.701364 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.701373 kubelet[3679]: W1216 12:16:25.701371 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.701422 kubelet[3679]: E1216 12:16:25.701391 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.701542 kubelet[3679]: E1216 12:16:25.701528 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.701575 kubelet[3679]: W1216 12:16:25.701550 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.701575 kubelet[3679]: E1216 12:16:25.701561 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:16:25.701731 kubelet[3679]: E1216 12:16:25.701713 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.701731 kubelet[3679]: W1216 12:16:25.701727 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.701887 kubelet[3679]: E1216 12:16:25.701739 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.701968 kubelet[3679]: E1216 12:16:25.701957 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.702033 kubelet[3679]: W1216 12:16:25.702023 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.702139 kubelet[3679]: E1216 12:16:25.702093 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.702309 kubelet[3679]: E1216 12:16:25.702299 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.702375 kubelet[3679]: W1216 12:16:25.702365 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.702462 kubelet[3679]: E1216 12:16:25.702423 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.702753 kubelet[3679]: E1216 12:16:25.702637 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.702753 kubelet[3679]: W1216 12:16:25.702647 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.702753 kubelet[3679]: E1216 12:16:25.702662 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.702832 kubelet[3679]: E1216 12:16:25.702813 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.702832 kubelet[3679]: W1216 12:16:25.702820 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.702832 kubelet[3679]: E1216 12:16:25.702828 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:16:25.702988 kubelet[3679]: E1216 12:16:25.702975 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.702988 kubelet[3679]: W1216 12:16:25.702984 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.703086 kubelet[3679]: E1216 12:16:25.702992 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.703086 kubelet[3679]: E1216 12:16:25.703083 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.703244 kubelet[3679]: W1216 12:16:25.703088 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.703244 kubelet[3679]: E1216 12:16:25.703096 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.703336 kubelet[3679]: E1216 12:16:25.703325 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.703384 kubelet[3679]: W1216 12:16:25.703375 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.703433 kubelet[3679]: E1216 12:16:25.703425 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.703562 kubelet[3679]: E1216 12:16:25.703550 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.703562 kubelet[3679]: W1216 12:16:25.703559 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.703697 kubelet[3679]: E1216 12:16:25.703570 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.703780 kubelet[3679]: E1216 12:16:25.703769 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.703780 kubelet[3679]: W1216 12:16:25.703777 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.703905 kubelet[3679]: E1216 12:16:25.703811 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:16:25.703905 kubelet[3679]: E1216 12:16:25.703856 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.703905 kubelet[3679]: W1216 12:16:25.703861 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.703905 kubelet[3679]: E1216 12:16:25.703880 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.704126 kubelet[3679]: E1216 12:16:25.704116 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.704126 kubelet[3679]: W1216 12:16:25.704123 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.704177 kubelet[3679]: E1216 12:16:25.704133 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.704579 kubelet[3679]: E1216 12:16:25.704511 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.704579 kubelet[3679]: W1216 12:16:25.704533 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.704579 kubelet[3679]: E1216 12:16:25.704549 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.704975 kubelet[3679]: E1216 12:16:25.704955 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.704975 kubelet[3679]: W1216 12:16:25.704970 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.705066 kubelet[3679]: E1216 12:16:25.704984 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.705274 kubelet[3679]: E1216 12:16:25.705257 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.705274 kubelet[3679]: W1216 12:16:25.705270 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.705344 kubelet[3679]: E1216 12:16:25.705284 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:16:25.705641 kubelet[3679]: E1216 12:16:25.705624 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.705641 kubelet[3679]: W1216 12:16:25.705637 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.705684 kubelet[3679]: E1216 12:16:25.705649 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.705826 kubelet[3679]: E1216 12:16:25.705814 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.705826 kubelet[3679]: W1216 12:16:25.705824 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.705989 kubelet[3679]: E1216 12:16:25.705884 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.706145 kubelet[3679]: E1216 12:16:25.706123 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.706145 kubelet[3679]: W1216 12:16:25.706143 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.706189 kubelet[3679]: E1216 12:16:25.706154 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:25.712994 kubelet[3679]: E1216 12:16:25.712935 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:25.712994 kubelet[3679]: W1216 12:16:25.712951 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:25.712994 kubelet[3679]: E1216 12:16:25.712962 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:26.626052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1495848316.mount: Deactivated successfully. 
Dec 16 12:16:26.983105 kubelet[3679]: E1216 12:16:26.982733 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1" Dec 16 12:16:27.116281 containerd[2086]: time="2025-12-16T12:16:27.116227919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:16:27.118561 containerd[2086]: time="2025-12-16T12:16:27.118522389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Dec 16 12:16:27.123922 containerd[2086]: time="2025-12-16T12:16:27.123231422Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:16:27.126783 containerd[2086]: time="2025-12-16T12:16:27.126754670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:16:27.127389 containerd[2086]: time="2025-12-16T12:16:27.127367811Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.576144517s" Dec 16 12:16:27.127526 containerd[2086]: time="2025-12-16T12:16:27.127510959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 16 12:16:27.129804 containerd[2086]: time="2025-12-16T12:16:27.129780165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 12:16:27.139072 containerd[2086]: time="2025-12-16T12:16:27.139038880Z" level=info msg="CreateContainer within sandbox \"21f944863473d723bb57f6723ae25734ec9f796ed201f43267acf03f7f82e67e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 12:16:27.152245 containerd[2086]: time="2025-12-16T12:16:27.151934710Z" level=info msg="Container 13eaeb18cd4b44292b747157de597a85398bc9bcf07b837e79f35a9f135aba73: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:16:27.164619 containerd[2086]: time="2025-12-16T12:16:27.164581637Z" level=info msg="CreateContainer within sandbox \"21f944863473d723bb57f6723ae25734ec9f796ed201f43267acf03f7f82e67e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"13eaeb18cd4b44292b747157de597a85398bc9bcf07b837e79f35a9f135aba73\"" Dec 16 12:16:27.165687 containerd[2086]: time="2025-12-16T12:16:27.165662378Z" level=info msg="StartContainer for \"13eaeb18cd4b44292b747157de597a85398bc9bcf07b837e79f35a9f135aba73\"" Dec 16 12:16:27.166926 containerd[2086]: time="2025-12-16T12:16:27.166899956Z" level=info msg="connecting to shim 13eaeb18cd4b44292b747157de597a85398bc9bcf07b837e79f35a9f135aba73" address="unix:///run/containerd/s/20464f538a9ed4664866bfe2245378221144df8af127052ef70c7b80a14a81cb" protocol=ttrpc version=3 Dec 16 12:16:27.183580 systemd[1]: Started 
cri-containerd-13eaeb18cd4b44292b747157de597a85398bc9bcf07b837e79f35a9f135aba73.scope - libcontainer container 13eaeb18cd4b44292b747157de597a85398bc9bcf07b837e79f35a9f135aba73. Dec 16 12:16:27.196000 audit: BPF prog-id=185 op=LOAD Dec 16 12:16:27.196000 audit: BPF prog-id=186 op=LOAD Dec 16 12:16:27.196000 audit[4244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=4081 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:27.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133656165623138636434623434323932623734373135376465353937 Dec 16 12:16:27.196000 audit: BPF prog-id=186 op=UNLOAD Dec 16 12:16:27.196000 audit[4244]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4081 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:27.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133656165623138636434623434323932623734373135376465353937 Dec 16 12:16:27.197000 audit: BPF prog-id=187 op=LOAD Dec 16 12:16:27.197000 audit[4244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=4081 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:27.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133656165623138636434623434323932623734373135376465353937 Dec 16 12:16:27.197000 audit: BPF prog-id=188 op=LOAD Dec 16 12:16:27.197000 audit[4244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=4081 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:27.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133656165623138636434623434323932623734373135376465353937 Dec 16 12:16:27.197000 audit: BPF prog-id=188 op=UNLOAD Dec 16 12:16:27.197000 audit[4244]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4081 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:27.197000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133656165623138636434623434323932623734373135376465353937 Dec 16 12:16:27.197000 audit: BPF prog-id=187 op=UNLOAD Dec 16 12:16:27.197000 audit[4244]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4081 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:27.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133656165623138636434623434323932623734373135376465353937 Dec 16 12:16:27.197000 audit: BPF prog-id=189 op=LOAD Dec 16 12:16:27.197000 audit[4244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=4081 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:27.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133656165623138636434623434323932623734373135376465353937 Dec 16 12:16:27.224231 containerd[2086]: time="2025-12-16T12:16:27.224184449Z" level=info msg="StartContainer for \"13eaeb18cd4b44292b747157de597a85398bc9bcf07b837e79f35a9f135aba73\" returns successfully" Dec 16 12:16:28.084467 kubelet[3679]: I1216 12:16:28.084343 3679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-656ccd948b-4v8zh" podStartSLOduration=1.505290893 podStartE2EDuration="3.0843291s" podCreationTimestamp="2025-12-16 12:16:25 +0000 UTC" firstStartedPulling="2025-12-16 12:16:25.549343222 +0000 UTC m=+18.642268779" lastFinishedPulling="2025-12-16 12:16:27.128381437 +0000 UTC m=+20.221306986" observedRunningTime="2025-12-16 12:16:28.083852236 +0000 UTC m=+21.176777793" watchObservedRunningTime="2025-12-16 12:16:28.0843291 +0000 UTC m=+21.177254649" Dec 16 12:16:28.113733 kubelet[3679]: E1216 12:16:28.113710 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.113733 kubelet[3679]: W1216 12:16:28.113728 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.113834 kubelet[3679]: E1216 12:16:28.113746 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:16:28.113888 kubelet[3679]: E1216 12:16:28.113874 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.113962 kubelet[3679]: W1216 12:16:28.113883 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.113962 kubelet[3679]: E1216 12:16:28.113920 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.114040 kubelet[3679]: E1216 12:16:28.114028 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.114040 kubelet[3679]: W1216 12:16:28.114037 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.114080 kubelet[3679]: E1216 12:16:28.114043 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.114154 kubelet[3679]: E1216 12:16:28.114142 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.114184 kubelet[3679]: W1216 12:16:28.114157 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.114184 kubelet[3679]: E1216 12:16:28.114163 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.114273 kubelet[3679]: E1216 12:16:28.114263 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.114273 kubelet[3679]: W1216 12:16:28.114269 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.114319 kubelet[3679]: E1216 12:16:28.114275 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.114369 kubelet[3679]: E1216 12:16:28.114358 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.114369 kubelet[3679]: W1216 12:16:28.114365 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.114417 kubelet[3679]: E1216 12:16:28.114370 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:16:28.114487 kubelet[3679]: E1216 12:16:28.114475 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.114487 kubelet[3679]: W1216 12:16:28.114483 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.114536 kubelet[3679]: E1216 12:16:28.114488 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.114592 kubelet[3679]: E1216 12:16:28.114582 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.114592 kubelet[3679]: W1216 12:16:28.114590 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.114639 kubelet[3679]: E1216 12:16:28.114595 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.114705 kubelet[3679]: E1216 12:16:28.114693 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.114705 kubelet[3679]: W1216 12:16:28.114701 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.114750 kubelet[3679]: E1216 12:16:28.114707 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.114798 kubelet[3679]: E1216 12:16:28.114788 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.114798 kubelet[3679]: W1216 12:16:28.114795 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.114846 kubelet[3679]: E1216 12:16:28.114800 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.114892 kubelet[3679]: E1216 12:16:28.114881 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.114892 kubelet[3679]: W1216 12:16:28.114888 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.114937 kubelet[3679]: E1216 12:16:28.114893 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:16:28.114985 kubelet[3679]: E1216 12:16:28.114974 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.115021 kubelet[3679]: W1216 12:16:28.114981 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.115021 kubelet[3679]: E1216 12:16:28.114992 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.115085 kubelet[3679]: E1216 12:16:28.115075 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.115085 kubelet[3679]: W1216 12:16:28.115081 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.115085 kubelet[3679]: E1216 12:16:28.115086 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.115182 kubelet[3679]: E1216 12:16:28.115172 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.115182 kubelet[3679]: W1216 12:16:28.115179 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.115229 kubelet[3679]: E1216 12:16:28.115184 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.115272 kubelet[3679]: E1216 12:16:28.115261 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.115272 kubelet[3679]: W1216 12:16:28.115268 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.115272 kubelet[3679]: E1216 12:16:28.115272 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.122826 kubelet[3679]: E1216 12:16:28.122716 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.122826 kubelet[3679]: W1216 12:16:28.122729 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.122826 kubelet[3679]: E1216 12:16:28.122739 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:16:28.122999 kubelet[3679]: E1216 12:16:28.122976 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.123071 kubelet[3679]: W1216 12:16:28.122987 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.123126 kubelet[3679]: E1216 12:16:28.123116 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.123278 kubelet[3679]: E1216 12:16:28.123261 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.123278 kubelet[3679]: W1216 12:16:28.123275 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.123352 kubelet[3679]: E1216 12:16:28.123288 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.123406 kubelet[3679]: E1216 12:16:28.123395 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.123406 kubelet[3679]: W1216 12:16:28.123402 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.123515 kubelet[3679]: E1216 12:16:28.123411 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.123556 kubelet[3679]: E1216 12:16:28.123526 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.123556 kubelet[3679]: W1216 12:16:28.123532 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.123556 kubelet[3679]: E1216 12:16:28.123541 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.123670 kubelet[3679]: E1216 12:16:28.123657 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.123670 kubelet[3679]: W1216 12:16:28.123667 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.123708 kubelet[3679]: E1216 12:16:28.123676 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:16:28.124023 kubelet[3679]: E1216 12:16:28.123939 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.124023 kubelet[3679]: W1216 12:16:28.123950 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.124023 kubelet[3679]: E1216 12:16:28.123966 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.124236 kubelet[3679]: E1216 12:16:28.124226 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.124295 kubelet[3679]: W1216 12:16:28.124285 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.124346 kubelet[3679]: E1216 12:16:28.124337 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.124472 kubelet[3679]: E1216 12:16:28.124444 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.124472 kubelet[3679]: W1216 12:16:28.124469 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.124551 kubelet[3679]: E1216 12:16:28.124480 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.124666 kubelet[3679]: E1216 12:16:28.124573 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.124666 kubelet[3679]: W1216 12:16:28.124578 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.124666 kubelet[3679]: E1216 12:16:28.124588 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.124856 kubelet[3679]: E1216 12:16:28.124790 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.124856 kubelet[3679]: W1216 12:16:28.124800 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.124856 kubelet[3679]: E1216 12:16:28.124816 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:16:28.125105 kubelet[3679]: E1216 12:16:28.125049 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.125105 kubelet[3679]: W1216 12:16:28.125060 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.125105 kubelet[3679]: E1216 12:16:28.125075 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.125364 kubelet[3679]: E1216 12:16:28.125295 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.125364 kubelet[3679]: W1216 12:16:28.125305 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.125364 kubelet[3679]: E1216 12:16:28.125319 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.125725 kubelet[3679]: E1216 12:16:28.125604 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.125725 kubelet[3679]: W1216 12:16:28.125617 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.125725 kubelet[3679]: E1216 12:16:28.125633 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.125812 kubelet[3679]: E1216 12:16:28.125774 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.125812 kubelet[3679]: W1216 12:16:28.125781 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.125812 kubelet[3679]: E1216 12:16:28.125787 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.125891 kubelet[3679]: E1216 12:16:28.125881 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.125891 kubelet[3679]: W1216 12:16:28.125888 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.125936 kubelet[3679]: E1216 12:16:28.125894 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:16:28.126345 kubelet[3679]: E1216 12:16:28.126249 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.126345 kubelet[3679]: W1216 12:16:28.126262 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.126345 kubelet[3679]: E1216 12:16:28.126270 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.126550 kubelet[3679]: E1216 12:16:28.126533 3679 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:16:28.126616 kubelet[3679]: W1216 12:16:28.126606 3679 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:16:28.126665 kubelet[3679]: E1216 12:16:28.126655 3679 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:16:28.167000 audit[4320]: NETFILTER_CFG table=filter:120 family=2 entries=21 op=nft_register_rule pid=4320 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:28.167000 audit[4320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffcbf8fd0 a2=0 a3=1 items=0 ppid=3827 pid=4320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:28.167000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:28.173000 audit[4320]: NETFILTER_CFG table=nat:121 family=2 entries=19 op=nft_register_chain pid=4320 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:28.173000 audit[4320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffffcbf8fd0 a2=0 a3=1 items=0 ppid=3827 pid=4320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:28.173000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:28.276109 containerd[2086]: time="2025-12-16T12:16:28.276066552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:16:28.279765 containerd[2086]: time="2025-12-16T12:16:28.279724396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:28.282050 containerd[2086]: time="2025-12-16T12:16:28.282010170Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:16:28.284772 containerd[2086]: time="2025-12-16T12:16:28.284734263Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:16:28.285115 containerd[2086]: time="2025-12-16T12:16:28.285090323Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.155173002s" Dec 16 12:16:28.285115 containerd[2086]: time="2025-12-16T12:16:28.285116052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 16 12:16:28.287372 containerd[2086]: time="2025-12-16T12:16:28.287345560Z" level=info msg="CreateContainer within sandbox \"fe05887e9414a3e8b5d91adfbf51f84a964e1d7346fde376444f1c0a20c73eab\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 12:16:28.306468 containerd[2086]: time="2025-12-16T12:16:28.304605707Z" level=info msg="Container 377558acb9d946c51cb42fd9a4ae4045324782f5bfac4f3b36ca8054380afe51: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:16:28.319432 containerd[2086]: time="2025-12-16T12:16:28.319411835Z" level=info msg="CreateContainer within sandbox \"fe05887e9414a3e8b5d91adfbf51f84a964e1d7346fde376444f1c0a20c73eab\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"377558acb9d946c51cb42fd9a4ae4045324782f5bfac4f3b36ca8054380afe51\"" Dec 16 12:16:28.319968 containerd[2086]: time="2025-12-16T12:16:28.319945181Z" level=info msg="StartContainer for \"377558acb9d946c51cb42fd9a4ae4045324782f5bfac4f3b36ca8054380afe51\"" Dec 16 12:16:28.321780 containerd[2086]: time="2025-12-16T12:16:28.321756883Z" level=info msg="connecting to shim 377558acb9d946c51cb42fd9a4ae4045324782f5bfac4f3b36ca8054380afe51" address="unix:///run/containerd/s/dff5e552e2feba439f260a7f6233af7362952d1f48670c7f1fb6db3ce54c0170" protocol=ttrpc version=3 Dec 16 12:16:28.337592 systemd[1]: Started cri-containerd-377558acb9d946c51cb42fd9a4ae4045324782f5bfac4f3b36ca8054380afe51.scope - libcontainer container 377558acb9d946c51cb42fd9a4ae4045324782f5bfac4f3b36ca8054380afe51. 
Dec 16 12:16:28.369000 audit: BPF prog-id=190 op=LOAD Dec 16 12:16:28.369000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4169 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:28.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337373535386163623964393436633531636234326664396134616534 Dec 16 12:16:28.370000 audit: BPF prog-id=191 op=LOAD Dec 16 12:16:28.370000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4169 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:28.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337373535386163623964393436633531636234326664396134616534 Dec 16 12:16:28.370000 audit: BPF prog-id=191 op=UNLOAD Dec 16 12:16:28.370000 audit[4325]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:28.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337373535386163623964393436633531636234326664396134616534 Dec 16 12:16:28.370000 audit: BPF prog-id=190 op=UNLOAD Dec 16 12:16:28.370000 audit[4325]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:28.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337373535386163623964393436633531636234326664396134616534 Dec 16 12:16:28.370000 audit: BPF prog-id=192 op=LOAD Dec 16 12:16:28.370000 audit[4325]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4169 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:28.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337373535386163623964393436633531636234326664396134616534 Dec 16 12:16:28.395788 containerd[2086]: time="2025-12-16T12:16:28.395743864Z" level=info msg="StartContainer for 
\"377558acb9d946c51cb42fd9a4ae4045324782f5bfac4f3b36ca8054380afe51\" returns successfully" Dec 16 12:16:28.404328 systemd[1]: cri-containerd-377558acb9d946c51cb42fd9a4ae4045324782f5bfac4f3b36ca8054380afe51.scope: Deactivated successfully. Dec 16 12:16:28.407743 containerd[2086]: time="2025-12-16T12:16:28.407706199Z" level=info msg="received container exit event container_id:\"377558acb9d946c51cb42fd9a4ae4045324782f5bfac4f3b36ca8054380afe51\" id:\"377558acb9d946c51cb42fd9a4ae4045324782f5bfac4f3b36ca8054380afe51\" pid:4338 exited_at:{seconds:1765887388 nanos:407351859}" Dec 16 12:16:28.408000 audit: BPF prog-id=192 op=UNLOAD Dec 16 12:16:28.429238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-377558acb9d946c51cb42fd9a4ae4045324782f5bfac4f3b36ca8054380afe51-rootfs.mount: Deactivated successfully. Dec 16 12:16:28.982765 kubelet[3679]: E1216 12:16:28.982460 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1" Dec 16 12:16:30.076729 containerd[2086]: time="2025-12-16T12:16:30.076655329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:16:30.983154 kubelet[3679]: E1216 12:16:30.982893 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1" Dec 16 12:16:32.104386 containerd[2086]: time="2025-12-16T12:16:32.104343019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:16:32.106329 containerd[2086]: time="2025-12-16T12:16:32.106215027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 16 12:16:32.108609 containerd[2086]: time="2025-12-16T12:16:32.108586660Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:16:32.112243 containerd[2086]: time="2025-12-16T12:16:32.112138342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:16:32.112502 containerd[2086]: time="2025-12-16T12:16:32.112483482Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.03578272s" Dec 16 12:16:32.112647 containerd[2086]: time="2025-12-16T12:16:32.112504275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 12:16:32.115501 containerd[2086]: time="2025-12-16T12:16:32.115473281Z" level=info msg="CreateContainer within sandbox \"fe05887e9414a3e8b5d91adfbf51f84a964e1d7346fde376444f1c0a20c73eab\" for 
container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:16:32.132413 containerd[2086]: time="2025-12-16T12:16:32.132384141Z" level=info msg="Container 7cf25747a83d2da6ee193c5b64caf729ecd0d5274cb94095708681c6030500c8: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:16:32.144996 containerd[2086]: time="2025-12-16T12:16:32.144966845Z" level=info msg="CreateContainer within sandbox \"fe05887e9414a3e8b5d91adfbf51f84a964e1d7346fde376444f1c0a20c73eab\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7cf25747a83d2da6ee193c5b64caf729ecd0d5274cb94095708681c6030500c8\"" Dec 16 12:16:32.146052 containerd[2086]: time="2025-12-16T12:16:32.145322233Z" level=info msg="StartContainer for \"7cf25747a83d2da6ee193c5b64caf729ecd0d5274cb94095708681c6030500c8\"" Dec 16 12:16:32.146251 containerd[2086]: time="2025-12-16T12:16:32.146223592Z" level=info msg="connecting to shim 7cf25747a83d2da6ee193c5b64caf729ecd0d5274cb94095708681c6030500c8" address="unix:///run/containerd/s/dff5e552e2feba439f260a7f6233af7362952d1f48670c7f1fb6db3ce54c0170" protocol=ttrpc version=3 Dec 16 12:16:32.163584 systemd[1]: Started cri-containerd-7cf25747a83d2da6ee193c5b64caf729ecd0d5274cb94095708681c6030500c8.scope - libcontainer container 7cf25747a83d2da6ee193c5b64caf729ecd0d5274cb94095708681c6030500c8. Dec 16 12:16:32.209181 kernel: kauditd_printk_skb: 90 callbacks suppressed Dec 16 12:16:32.209660 kernel: audit: type=1334 audit(1765887392.200:595): prog-id=193 op=LOAD Dec 16 12:16:32.209713 kernel: audit: type=1300 audit(1765887392.200:595): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4169 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:32.200000 audit: BPF prog-id=193 op=LOAD Dec 16 12:16:32.200000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4169 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:32.200000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663235373437613833643264613665653139336335623634636166 Dec 16 12:16:32.241226 kernel: audit: type=1327 audit(1765887392.200:595): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663235373437613833643264613665653139336335623634636166 Dec 16 12:16:32.208000 audit: BPF prog-id=194 op=LOAD Dec 16 12:16:32.245905 kernel: audit: type=1334 audit(1765887392.208:596): prog-id=194 op=LOAD Dec 16 12:16:32.208000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4169 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:32.261690 kernel: audit: type=1300 audit(1765887392.208:596): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4169 pid=4388 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:32.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663235373437613833643264613665653139336335623634636166 Dec 16 12:16:32.277558 kernel: audit: type=1327 audit(1765887392.208:596): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663235373437613833643264613665653139336335623634636166 Dec 16 12:16:32.208000 audit: BPF prog-id=194 op=UNLOAD Dec 16 12:16:32.282630 kernel: audit: type=1334 audit(1765887392.208:597): prog-id=194 op=UNLOAD Dec 16 12:16:32.208000 audit[4388]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:32.298932 kernel: audit: type=1300 audit(1765887392.208:597): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:32.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663235373437613833643264613665653139336335623634636166 Dec 16 12:16:32.315047 kernel: audit: type=1327 audit(1765887392.208:597): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663235373437613833643264613665653139336335623634636166 Dec 16 12:16:32.208000 audit: BPF prog-id=193 op=UNLOAD Dec 16 12:16:32.320285 kernel: audit: type=1334 audit(1765887392.208:598): prog-id=193 op=UNLOAD Dec 16 12:16:32.208000 audit[4388]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:32.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663235373437613833643264613665653139336335623634636166 Dec 16 12:16:32.208000 audit: BPF prog-id=195 op=LOAD Dec 16 12:16:32.208000 audit[4388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4169 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:32.208000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763663235373437613833643264613665653139336335623634636166 Dec 16 12:16:32.326473 containerd[2086]: time="2025-12-16T12:16:32.326388760Z" level=info msg="StartContainer for \"7cf25747a83d2da6ee193c5b64caf729ecd0d5274cb94095708681c6030500c8\" returns successfully" Dec 16 12:16:32.984032 kubelet[3679]: E1216 12:16:32.982607 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1" Dec 16 12:16:33.366165 containerd[2086]: time="2025-12-16T12:16:33.366116437Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:16:33.368478 systemd[1]: cri-containerd-7cf25747a83d2da6ee193c5b64caf729ecd0d5274cb94095708681c6030500c8.scope: Deactivated successfully. Dec 16 12:16:33.368769 systemd[1]: cri-containerd-7cf25747a83d2da6ee193c5b64caf729ecd0d5274cb94095708681c6030500c8.scope: Consumed 305ms CPU time, 187M memory peak, 165.9M written to disk. Dec 16 12:16:33.369512 containerd[2086]: time="2025-12-16T12:16:33.369485425Z" level=info msg="received container exit event container_id:\"7cf25747a83d2da6ee193c5b64caf729ecd0d5274cb94095708681c6030500c8\" id:\"7cf25747a83d2da6ee193c5b64caf729ecd0d5274cb94095708681c6030500c8\" pid:4401 exited_at:{seconds:1765887393 nanos:369080707}" Dec 16 12:16:33.372000 audit: BPF prog-id=195 op=UNLOAD Dec 16 12:16:33.385086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cf25747a83d2da6ee193c5b64caf729ecd0d5274cb94095708681c6030500c8-rootfs.mount: Deactivated successfully. 
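The install-cni container that just exited is what populates /etc/cni/net.d; the "failed to reload cni configuration ... no network config found in /etc/cni/net.d" message was emitted while it was still writing files, and containerd keeps reporting NetworkReady=false until a valid network config list appears in that directory. As a rough, hypothetical illustration of the general shape of such a file (a deliberately minimal conflist using the loopback plugin as a stand-in; the real Calico conflist that satisfies this node is considerably richer):

import json

# Hypothetical, minimal CNI network config list, only to show the file shape
# containerd watches for in /etc/cni/net.d. Not Calico's actual configuration.
conflist = {
    "cniVersion": "0.4.0",
    "name": "example-net",
    "plugins": [
        {"type": "loopback"},
    ],
}

with open("/tmp/10-example.conflist", "w") as f:  # illustrative path only
    json.dump(conflist, f, indent=2)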
Dec 16 12:16:33.447795 kubelet[3679]: I1216 12:16:33.447764 3679 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 12:16:33.768439 kubelet[3679]: I1216 12:16:33.558869 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ce795cb-7cc9-42af-81d6-cb34fd295931-config\") pod \"goldmane-666569f655-gqbjj\" (UID: \"2ce795cb-7cc9-42af-81d6-cb34fd295931\") " pod="calico-system/goldmane-666569f655-gqbjj" Dec 16 12:16:33.768439 kubelet[3679]: I1216 12:16:33.558897 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4505b85b-9178-4a13-950e-42d575e3f166-config-volume\") pod \"coredns-668d6bf9bc-8kg2p\" (UID: \"4505b85b-9178-4a13-950e-42d575e3f166\") " pod="kube-system/coredns-668d6bf9bc-8kg2p" Dec 16 12:16:33.768439 kubelet[3679]: I1216 12:16:33.558912 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glc2p\" (UniqueName: \"kubernetes.io/projected/2ce795cb-7cc9-42af-81d6-cb34fd295931-kube-api-access-glc2p\") pod \"goldmane-666569f655-gqbjj\" (UID: \"2ce795cb-7cc9-42af-81d6-cb34fd295931\") " pod="calico-system/goldmane-666569f655-gqbjj" Dec 16 12:16:33.768439 kubelet[3679]: I1216 12:16:33.558923 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9cbc393d-9eb5-4eab-a130-293205187b74-calico-apiserver-certs\") pod \"calico-apiserver-7d6ff77cd6-97mm5\" (UID: \"9cbc393d-9eb5-4eab-a130-293205187b74\") " pod="calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5" Dec 16 12:16:33.768439 kubelet[3679]: I1216 12:16:33.558933 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce9228ac-15bc-4471-a735-af33f7e24f95-config-volume\") pod \"coredns-668d6bf9bc-kgspd\" (UID: \"ce9228ac-15bc-4471-a735-af33f7e24f95\") " pod="kube-system/coredns-668d6bf9bc-kgspd" Dec 16 12:16:33.492750 systemd[1]: Created slice kubepods-burstable-pod4505b85b_9178_4a13_950e_42d575e3f166.slice - libcontainer container kubepods-burstable-pod4505b85b_9178_4a13_950e_42d575e3f166.slice. 
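The "Created slice kubepods-burstable-pod4505b85b_9178_4a13_950e_42d575e3f166.slice" entries show the kubelet's systemd cgroup driver creating one slice per pod under its QoS-class parent, with the dashes in the pod UID escaped to underscores to fit systemd unit-name rules. A quick sketch of that naming convention, inferred from the burstable and best-effort slices in this log:

# Reconstruct the per-pod systemd slice name used by the systemd cgroup
# driver, as seen in the "Created slice ..." entries above (burstable and
# best-effort pods; guaranteed pods omit the QoS segment).
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("burstable", "4505b85b-9178-4a13-950e-42d575e3f166"))
# -> kubepods-burstable-pod4505b85b_9178_4a13_950e_42d575e3f166.slice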
Dec 16 12:16:33.768681 kubelet[3679]: I1216 12:16:33.558945 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lr9r\" (UniqueName: \"kubernetes.io/projected/4505b85b-9178-4a13-950e-42d575e3f166-kube-api-access-4lr9r\") pod \"coredns-668d6bf9bc-8kg2p\" (UID: \"4505b85b-9178-4a13-950e-42d575e3f166\") " pod="kube-system/coredns-668d6bf9bc-8kg2p" Dec 16 12:16:33.768681 kubelet[3679]: I1216 12:16:33.558957 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9-tigera-ca-bundle\") pod \"calico-kube-controllers-db96bddb4-wqn8x\" (UID: \"e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9\") " pod="calico-system/calico-kube-controllers-db96bddb4-wqn8x" Dec 16 12:16:33.768681 kubelet[3679]: I1216 12:16:33.558968 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db7dfe29-6011-451c-b4ed-7fa97679404d-whisker-ca-bundle\") pod \"whisker-5cbff6d4fc-8828n\" (UID: \"db7dfe29-6011-451c-b4ed-7fa97679404d\") " pod="calico-system/whisker-5cbff6d4fc-8828n" Dec 16 12:16:33.768681 kubelet[3679]: I1216 12:16:33.558978 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdnmv\" (UniqueName: \"kubernetes.io/projected/4bd414c4-3198-4659-bd4d-34927e106bf1-kube-api-access-jdnmv\") pod \"calico-apiserver-7d6ff77cd6-hb2wg\" (UID: \"4bd414c4-3198-4659-bd4d-34927e106bf1\") " pod="calico-apiserver/calico-apiserver-7d6ff77cd6-hb2wg" Dec 16 12:16:33.768681 kubelet[3679]: I1216 12:16:33.558989 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ce795cb-7cc9-42af-81d6-cb34fd295931-goldmane-ca-bundle\") pod \"goldmane-666569f655-gqbjj\" (UID: \"2ce795cb-7cc9-42af-81d6-cb34fd295931\") " pod="calico-system/goldmane-666569f655-gqbjj" Dec 16 12:16:33.500663 systemd[1]: Created slice kubepods-burstable-podce9228ac_15bc_4471_a735_af33f7e24f95.slice - libcontainer container kubepods-burstable-podce9228ac_15bc_4471_a735_af33f7e24f95.slice. 
Dec 16 12:16:33.768798 kubelet[3679]: I1216 12:16:33.558999 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59twm\" (UniqueName: \"kubernetes.io/projected/ce9228ac-15bc-4471-a735-af33f7e24f95-kube-api-access-59twm\") pod \"coredns-668d6bf9bc-kgspd\" (UID: \"ce9228ac-15bc-4471-a735-af33f7e24f95\") " pod="kube-system/coredns-668d6bf9bc-kgspd" Dec 16 12:16:33.768798 kubelet[3679]: I1216 12:16:33.559008 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69q8d\" (UniqueName: \"kubernetes.io/projected/db7dfe29-6011-451c-b4ed-7fa97679404d-kube-api-access-69q8d\") pod \"whisker-5cbff6d4fc-8828n\" (UID: \"db7dfe29-6011-451c-b4ed-7fa97679404d\") " pod="calico-system/whisker-5cbff6d4fc-8828n" Dec 16 12:16:33.768798 kubelet[3679]: I1216 12:16:33.559019 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbkwv\" (UniqueName: \"kubernetes.io/projected/9cbc393d-9eb5-4eab-a130-293205187b74-kube-api-access-tbkwv\") pod \"calico-apiserver-7d6ff77cd6-97mm5\" (UID: \"9cbc393d-9eb5-4eab-a130-293205187b74\") " pod="calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5" Dec 16 12:16:33.768798 kubelet[3679]: I1216 12:16:33.559035 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2ce795cb-7cc9-42af-81d6-cb34fd295931-goldmane-key-pair\") pod \"goldmane-666569f655-gqbjj\" (UID: \"2ce795cb-7cc9-42af-81d6-cb34fd295931\") " pod="calico-system/goldmane-666569f655-gqbjj" Dec 16 12:16:33.768798 kubelet[3679]: I1216 12:16:33.559050 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvf8x\" (UniqueName: \"kubernetes.io/projected/e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9-kube-api-access-cvf8x\") pod \"calico-kube-controllers-db96bddb4-wqn8x\" (UID: \"e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9\") " pod="calico-system/calico-kube-controllers-db96bddb4-wqn8x" Dec 16 12:16:33.509247 systemd[1]: Created slice kubepods-besteffort-pode5dc32c3_fc1f_40c7_8b8d_03d6cdb864e9.slice - libcontainer container kubepods-besteffort-pode5dc32c3_fc1f_40c7_8b8d_03d6cdb864e9.slice. Dec 16 12:16:33.768903 kubelet[3679]: I1216 12:16:33.559064 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/db7dfe29-6011-451c-b4ed-7fa97679404d-whisker-backend-key-pair\") pod \"whisker-5cbff6d4fc-8828n\" (UID: \"db7dfe29-6011-451c-b4ed-7fa97679404d\") " pod="calico-system/whisker-5cbff6d4fc-8828n" Dec 16 12:16:33.768903 kubelet[3679]: I1216 12:16:33.559074 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4bd414c4-3198-4659-bd4d-34927e106bf1-calico-apiserver-certs\") pod \"calico-apiserver-7d6ff77cd6-hb2wg\" (UID: \"4bd414c4-3198-4659-bd4d-34927e106bf1\") " pod="calico-apiserver/calico-apiserver-7d6ff77cd6-hb2wg" Dec 16 12:16:33.519333 systemd[1]: Created slice kubepods-besteffort-pod9cbc393d_9eb5_4eab_a130_293205187b74.slice - libcontainer container kubepods-besteffort-pod9cbc393d_9eb5_4eab_a130_293205187b74.slice. 
Dec 16 12:16:33.524735 systemd[1]: Created slice kubepods-besteffort-pod4bd414c4_3198_4659_bd4d_34927e106bf1.slice - libcontainer container kubepods-besteffort-pod4bd414c4_3198_4659_bd4d_34927e106bf1.slice. Dec 16 12:16:33.531156 systemd[1]: Created slice kubepods-besteffort-pod2ce795cb_7cc9_42af_81d6_cb34fd295931.slice - libcontainer container kubepods-besteffort-pod2ce795cb_7cc9_42af_81d6_cb34fd295931.slice. Dec 16 12:16:33.537200 systemd[1]: Created slice kubepods-besteffort-poddb7dfe29_6011_451c_b4ed_7fa97679404d.slice - libcontainer container kubepods-besteffort-poddb7dfe29_6011_451c_b4ed_7fa97679404d.slice. Dec 16 12:16:34.073079 containerd[2086]: time="2025-12-16T12:16:34.072777715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6ff77cd6-97mm5,Uid:9cbc393d-9eb5-4eab-a130-293205187b74,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:16:34.073079 containerd[2086]: time="2025-12-16T12:16:34.072839797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8kg2p,Uid:4505b85b-9178-4a13-950e-42d575e3f166,Namespace:kube-system,Attempt:0,}" Dec 16 12:16:34.073079 containerd[2086]: time="2025-12-16T12:16:34.072889599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-db96bddb4-wqn8x,Uid:e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9,Namespace:calico-system,Attempt:0,}" Dec 16 12:16:34.073079 containerd[2086]: time="2025-12-16T12:16:34.072938009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gqbjj,Uid:2ce795cb-7cc9-42af-81d6-cb34fd295931,Namespace:calico-system,Attempt:0,}" Dec 16 12:16:34.073290 containerd[2086]: time="2025-12-16T12:16:34.072778131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kgspd,Uid:ce9228ac-15bc-4471-a735-af33f7e24f95,Namespace:kube-system,Attempt:0,}" Dec 16 12:16:34.074855 containerd[2086]: time="2025-12-16T12:16:34.074827082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6ff77cd6-hb2wg,Uid:4bd414c4-3198-4659-bd4d-34927e106bf1,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:16:34.084733 containerd[2086]: time="2025-12-16T12:16:34.084606937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cbff6d4fc-8828n,Uid:db7dfe29-6011-451c-b4ed-7fa97679404d,Namespace:calico-system,Attempt:0,}" Dec 16 12:16:34.285874 containerd[2086]: time="2025-12-16T12:16:34.285799970Z" level=error msg="Failed to destroy network for sandbox \"2595b56747e3ca0d74ff7f1e3e1031044432411fab0eda9f6dcf10ffd4cf11d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:34.306025 containerd[2086]: time="2025-12-16T12:16:34.305927101Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6ff77cd6-97mm5,Uid:9cbc393d-9eb5-4eab-a130-293205187b74,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2595b56747e3ca0d74ff7f1e3e1031044432411fab0eda9f6dcf10ffd4cf11d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:34.306323 kubelet[3679]: E1216 12:16:34.306177 3679 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2595b56747e3ca0d74ff7f1e3e1031044432411fab0eda9f6dcf10ffd4cf11d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:34.307584 kubelet[3679]: E1216 12:16:34.306426 3679 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2595b56747e3ca0d74ff7f1e3e1031044432411fab0eda9f6dcf10ffd4cf11d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5" Dec 16 12:16:34.307584 kubelet[3679]: E1216 12:16:34.306459 3679 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2595b56747e3ca0d74ff7f1e3e1031044432411fab0eda9f6dcf10ffd4cf11d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5" Dec 16 12:16:34.307584 kubelet[3679]: E1216 12:16:34.306502 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d6ff77cd6-97mm5_calico-apiserver(9cbc393d-9eb5-4eab-a130-293205187b74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d6ff77cd6-97mm5_calico-apiserver(9cbc393d-9eb5-4eab-a130-293205187b74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2595b56747e3ca0d74ff7f1e3e1031044432411fab0eda9f6dcf10ffd4cf11d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5" podUID="9cbc393d-9eb5-4eab-a130-293205187b74" Dec 16 12:16:34.382367 containerd[2086]: time="2025-12-16T12:16:34.382202839Z" level=error msg="Failed to destroy network for sandbox \"3998d5917ef72eef032fbe07d034dc8288f4c5a06cec6ca058879398ee1c4306\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:34.395524 containerd[2086]: time="2025-12-16T12:16:34.393412296Z" level=error msg="Failed to destroy network for sandbox \"364169d61ddee02d874b234fb73a62a946f5aaf80c5a6c2669f1d2cd2779e00b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:34.399407 systemd[1]: run-netns-cni\x2d522eb269\x2d05a8\x2da072\x2d4995\x2d17884e43be7b.mount: Deactivated successfully. 
Dec 16 12:16:34.401263 containerd[2086]: time="2025-12-16T12:16:34.401232908Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gqbjj,Uid:2ce795cb-7cc9-42af-81d6-cb34fd295931,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3998d5917ef72eef032fbe07d034dc8288f4c5a06cec6ca058879398ee1c4306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:34.402349 kubelet[3679]: E1216 12:16:34.402195 3679 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3998d5917ef72eef032fbe07d034dc8288f4c5a06cec6ca058879398ee1c4306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:34.402349 kubelet[3679]: E1216 12:16:34.402260 3679 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3998d5917ef72eef032fbe07d034dc8288f4c5a06cec6ca058879398ee1c4306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-gqbjj" Dec 16 12:16:34.402349 kubelet[3679]: E1216 12:16:34.402275 3679 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3998d5917ef72eef032fbe07d034dc8288f4c5a06cec6ca058879398ee1c4306\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-gqbjj" Dec 16 12:16:34.403444 kubelet[3679]: E1216 12:16:34.402315 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-gqbjj_calico-system(2ce795cb-7cc9-42af-81d6-cb34fd295931)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-gqbjj_calico-system(2ce795cb-7cc9-42af-81d6-cb34fd295931)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3998d5917ef72eef032fbe07d034dc8288f4c5a06cec6ca058879398ee1c4306\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-gqbjj" podUID="2ce795cb-7cc9-42af-81d6-cb34fd295931" Dec 16 12:16:34.405978 containerd[2086]: time="2025-12-16T12:16:34.405950526Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kgspd,Uid:ce9228ac-15bc-4471-a735-af33f7e24f95,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"364169d61ddee02d874b234fb73a62a946f5aaf80c5a6c2669f1d2cd2779e00b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:34.406233 kubelet[3679]: E1216 12:16:34.406208 3679 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"364169d61ddee02d874b234fb73a62a946f5aaf80c5a6c2669f1d2cd2779e00b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:34.406296 kubelet[3679]: E1216 12:16:34.406272 3679 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"364169d61ddee02d874b234fb73a62a946f5aaf80c5a6c2669f1d2cd2779e00b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kgspd" Dec 16 12:16:34.406322 kubelet[3679]: E1216 12:16:34.406299 3679 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"364169d61ddee02d874b234fb73a62a946f5aaf80c5a6c2669f1d2cd2779e00b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kgspd" Dec 16 12:16:34.406361 kubelet[3679]: E1216 12:16:34.406325 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-kgspd_kube-system(ce9228ac-15bc-4471-a735-af33f7e24f95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-kgspd_kube-system(ce9228ac-15bc-4471-a735-af33f7e24f95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"364169d61ddee02d874b234fb73a62a946f5aaf80c5a6c2669f1d2cd2779e00b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kgspd" podUID="ce9228ac-15bc-4471-a735-af33f7e24f95" Dec 16 12:16:34.406401 containerd[2086]: time="2025-12-16T12:16:34.406035329Z" level=error msg="Failed to destroy network for sandbox \"3db3951bf1f89715ae74b70ddeb6f1e3bb1b4f2600503a47843fac81a6c5a229\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:34.406444 containerd[2086]: time="2025-12-16T12:16:34.406050106Z" level=error msg="Failed to destroy network for sandbox \"13e3c0a1aa2755fe1406c9d997fd4f509ab37f94d36ec9cbd24dac9e1d05c688\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:34.407277 containerd[2086]: time="2025-12-16T12:16:34.406070227Z" level=error msg="Failed to destroy network for sandbox \"59c7ca05ea00bc29ec4a776ec36aa3bd5690ea0c974d662d5975ec443a086670\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:34.409187 systemd[1]: run-netns-cni\x2dbe2da982\x2d7fbc\x2dfc0e\x2d5c70\x2ddb9a00ed1c2d.mount: Deactivated successfully. Dec 16 12:16:34.412411 systemd[1]: run-netns-cni\x2df12c7be5\x2df213\x2d6839\x2dfa47\x2d3a6aa2e277f7.mount: Deactivated successfully. 
Dec 16 12:16:34.412828 systemd[1]: run-netns-cni\x2d8a310baa\x2d53a7\x2de6e5\x2d25cc\x2d73e1607776d8.mount: Deactivated successfully. Dec 16 12:16:34.414697 containerd[2086]: time="2025-12-16T12:16:34.414592703Z" level=error msg="Failed to destroy network for sandbox \"44348f0dd919b652e6475df5ba3a9acf0581a319f10720bcda593c473ef1c19a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:34.415755 systemd[1]: run-netns-cni\x2d80fb93e2\x2d4957\x2db59a\x2d1c14\x2dd954bac284aa.mount: Deactivated successfully. Dec 16 12:16:34.419798 containerd[2086]: time="2025-12-16T12:16:34.419765265Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-db96bddb4-wqn8x,Uid:e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"59c7ca05ea00bc29ec4a776ec36aa3bd5690ea0c974d662d5975ec443a086670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:34.420021 kubelet[3679]: E1216 12:16:34.420000 3679 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59c7ca05ea00bc29ec4a776ec36aa3bd5690ea0c974d662d5975ec443a086670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:34.420140 kubelet[3679]: E1216 12:16:34.420124 3679 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59c7ca05ea00bc29ec4a776ec36aa3bd5690ea0c974d662d5975ec443a086670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-db96bddb4-wqn8x" Dec 16 12:16:34.420424 kubelet[3679]: E1216 12:16:34.420214 3679 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59c7ca05ea00bc29ec4a776ec36aa3bd5690ea0c974d662d5975ec443a086670\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-db96bddb4-wqn8x" Dec 16 12:16:34.420424 kubelet[3679]: E1216 12:16:34.420267 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-db96bddb4-wqn8x_calico-system(e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-db96bddb4-wqn8x_calico-system(e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59c7ca05ea00bc29ec4a776ec36aa3bd5690ea0c974d662d5975ec443a086670\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-db96bddb4-wqn8x" 
podUID="e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9" Dec 16 12:16:34.424106 containerd[2086]: time="2025-12-16T12:16:34.424068716Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8kg2p,Uid:4505b85b-9178-4a13-950e-42d575e3f166,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db3951bf1f89715ae74b70ddeb6f1e3bb1b4f2600503a47843fac81a6c5a229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:34.424803 kubelet[3679]: E1216 12:16:34.424292 3679 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db3951bf1f89715ae74b70ddeb6f1e3bb1b4f2600503a47843fac81a6c5a229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:34.424803 kubelet[3679]: E1216 12:16:34.424325 3679 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db3951bf1f89715ae74b70ddeb6f1e3bb1b4f2600503a47843fac81a6c5a229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8kg2p" Dec 16 12:16:34.424803 kubelet[3679]: E1216 12:16:34.424337 3679 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db3951bf1f89715ae74b70ddeb6f1e3bb1b4f2600503a47843fac81a6c5a229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8kg2p" Dec 16 12:16:34.424901 kubelet[3679]: E1216 12:16:34.424366 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8kg2p_kube-system(4505b85b-9178-4a13-950e-42d575e3f166)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8kg2p_kube-system(4505b85b-9178-4a13-950e-42d575e3f166)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3db3951bf1f89715ae74b70ddeb6f1e3bb1b4f2600503a47843fac81a6c5a229\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8kg2p" podUID="4505b85b-9178-4a13-950e-42d575e3f166" Dec 16 12:16:34.426716 containerd[2086]: time="2025-12-16T12:16:34.426681606Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cbff6d4fc-8828n,Uid:db7dfe29-6011-451c-b4ed-7fa97679404d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"13e3c0a1aa2755fe1406c9d997fd4f509ab37f94d36ec9cbd24dac9e1d05c688\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:34.426857 kubelet[3679]: E1216 12:16:34.426807 3679 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"13e3c0a1aa2755fe1406c9d997fd4f509ab37f94d36ec9cbd24dac9e1d05c688\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:34.426857 kubelet[3679]: E1216 12:16:34.426835 3679 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13e3c0a1aa2755fe1406c9d997fd4f509ab37f94d36ec9cbd24dac9e1d05c688\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cbff6d4fc-8828n" Dec 16 12:16:34.426857 kubelet[3679]: E1216 12:16:34.426851 3679 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13e3c0a1aa2755fe1406c9d997fd4f509ab37f94d36ec9cbd24dac9e1d05c688\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cbff6d4fc-8828n" Dec 16 12:16:34.426924 kubelet[3679]: E1216 12:16:34.426886 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5cbff6d4fc-8828n_calico-system(db7dfe29-6011-451c-b4ed-7fa97679404d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5cbff6d4fc-8828n_calico-system(db7dfe29-6011-451c-b4ed-7fa97679404d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13e3c0a1aa2755fe1406c9d997fd4f509ab37f94d36ec9cbd24dac9e1d05c688\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5cbff6d4fc-8828n" podUID="db7dfe29-6011-451c-b4ed-7fa97679404d" Dec 16 12:16:34.433259 containerd[2086]: time="2025-12-16T12:16:34.433183365Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6ff77cd6-hb2wg,Uid:4bd414c4-3198-4659-bd4d-34927e106bf1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"44348f0dd919b652e6475df5ba3a9acf0581a319f10720bcda593c473ef1c19a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:34.433347 kubelet[3679]: E1216 12:16:34.433309 3679 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44348f0dd919b652e6475df5ba3a9acf0581a319f10720bcda593c473ef1c19a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:34.433347 kubelet[3679]: E1216 12:16:34.433341 3679 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44348f0dd919b652e6475df5ba3a9acf0581a319f10720bcda593c473ef1c19a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-hb2wg" Dec 16 12:16:34.433393 kubelet[3679]: E1216 12:16:34.433352 3679 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44348f0dd919b652e6475df5ba3a9acf0581a319f10720bcda593c473ef1c19a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-hb2wg" Dec 16 12:16:34.433416 kubelet[3679]: E1216 12:16:34.433374 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d6ff77cd6-hb2wg_calico-apiserver(4bd414c4-3198-4659-bd4d-34927e106bf1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d6ff77cd6-hb2wg_calico-apiserver(4bd414c4-3198-4659-bd4d-34927e106bf1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44348f0dd919b652e6475df5ba3a9acf0581a319f10720bcda593c473ef1c19a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-hb2wg" podUID="4bd414c4-3198-4659-bd4d-34927e106bf1" Dec 16 12:16:34.988180 systemd[1]: Created slice kubepods-besteffort-pod03762c75_9df3_49a2_9166_fb2b4578d7a1.slice - libcontainer container kubepods-besteffort-pod03762c75_9df3_49a2_9166_fb2b4578d7a1.slice. Dec 16 12:16:34.990212 containerd[2086]: time="2025-12-16T12:16:34.990182459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mk7sc,Uid:03762c75-9df3-49a2-9166-fb2b4578d7a1,Namespace:calico-system,Attempt:0,}" Dec 16 12:16:35.026137 containerd[2086]: time="2025-12-16T12:16:35.026105332Z" level=error msg="Failed to destroy network for sandbox \"9bf12d9557fe0799e5edb82b76839e5fad4f71bcde7a195ffaa1141c2882e46b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:35.030757 containerd[2086]: time="2025-12-16T12:16:35.030728915Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mk7sc,Uid:03762c75-9df3-49a2-9166-fb2b4578d7a1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bf12d9557fe0799e5edb82b76839e5fad4f71bcde7a195ffaa1141c2882e46b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:35.030905 kubelet[3679]: E1216 12:16:35.030884 3679 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bf12d9557fe0799e5edb82b76839e5fad4f71bcde7a195ffaa1141c2882e46b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:16:35.030984 kubelet[3679]: E1216 12:16:35.030921 3679 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9bf12d9557fe0799e5edb82b76839e5fad4f71bcde7a195ffaa1141c2882e46b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mk7sc" Dec 16 12:16:35.030984 kubelet[3679]: E1216 12:16:35.030935 3679 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bf12d9557fe0799e5edb82b76839e5fad4f71bcde7a195ffaa1141c2882e46b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mk7sc" Dec 16 12:16:35.030984 kubelet[3679]: E1216 12:16:35.030969 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mk7sc_calico-system(03762c75-9df3-49a2-9166-fb2b4578d7a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mk7sc_calico-system(03762c75-9df3-49a2-9166-fb2b4578d7a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9bf12d9557fe0799e5edb82b76839e5fad4f71bcde7a195ffaa1141c2882e46b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1" Dec 16 12:16:35.090123 containerd[2086]: time="2025-12-16T12:16:35.090099560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:16:38.659781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount708724607.mount: Deactivated successfully. 
Dec 16 12:16:39.665594 containerd[2086]: time="2025-12-16T12:16:39.665532298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:16:39.728923 containerd[2086]: time="2025-12-16T12:16:39.728778520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 16 12:16:39.732469 containerd[2086]: time="2025-12-16T12:16:39.731950051Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:16:39.735688 containerd[2086]: time="2025-12-16T12:16:39.735647680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:16:39.736054 containerd[2086]: time="2025-12-16T12:16:39.736033637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.645907964s" Dec 16 12:16:39.736143 containerd[2086]: time="2025-12-16T12:16:39.736130952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 12:16:39.777465 containerd[2086]: time="2025-12-16T12:16:39.777261348Z" level=info msg="CreateContainer within sandbox \"fe05887e9414a3e8b5d91adfbf51f84a964e1d7346fde376444f1c0a20c73eab\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:16:40.133929 containerd[2086]: time="2025-12-16T12:16:40.133076697Z" level=info msg="Container b97fa1b6fc7eeb9b1423135bbf187784be9a84b3e55f53b33d5b76baa0d72fb0: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:16:40.135370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3931442847.mount: Deactivated successfully. Dec 16 12:16:40.149155 containerd[2086]: time="2025-12-16T12:16:40.149122630Z" level=info msg="CreateContainer within sandbox \"fe05887e9414a3e8b5d91adfbf51f84a964e1d7346fde376444f1c0a20c73eab\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b97fa1b6fc7eeb9b1423135bbf187784be9a84b3e55f53b33d5b76baa0d72fb0\"" Dec 16 12:16:40.149738 containerd[2086]: time="2025-12-16T12:16:40.149710834Z" level=info msg="StartContainer for \"b97fa1b6fc7eeb9b1423135bbf187784be9a84b3e55f53b33d5b76baa0d72fb0\"" Dec 16 12:16:40.151271 containerd[2086]: time="2025-12-16T12:16:40.151247630Z" level=info msg="connecting to shim b97fa1b6fc7eeb9b1423135bbf187784be9a84b3e55f53b33d5b76baa0d72fb0" address="unix:///run/containerd/s/dff5e552e2feba439f260a7f6233af7362952d1f48670c7f1fb6db3ce54c0170" protocol=ttrpc version=3 Dec 16 12:16:40.171593 systemd[1]: Started cri-containerd-b97fa1b6fc7eeb9b1423135bbf187784be9a84b3e55f53b33d5b76baa0d72fb0.scope - libcontainer container b97fa1b6fc7eeb9b1423135bbf187784be9a84b3e55f53b33d5b76baa0d72fb0. 
Dec 16 12:16:40.211000 audit: BPF prog-id=196 op=LOAD Dec 16 12:16:40.216360 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 12:16:40.216412 kernel: audit: type=1334 audit(1765887400.211:601): prog-id=196 op=LOAD Dec 16 12:16:40.211000 audit[4678]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4169 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:40.236688 kernel: audit: type=1300 audit(1765887400.211:601): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4169 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:40.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239376661316236666337656562396231343233313335626266313837 Dec 16 12:16:40.252747 kernel: audit: type=1327 audit(1765887400.211:601): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239376661316236666337656562396231343233313335626266313837 Dec 16 12:16:40.217000 audit: BPF prog-id=197 op=LOAD Dec 16 12:16:40.257462 kernel: audit: type=1334 audit(1765887400.217:602): prog-id=197 op=LOAD Dec 16 12:16:40.217000 audit[4678]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4169 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:40.273488 kernel: audit: type=1300 audit(1765887400.217:602): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4169 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:40.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239376661316236666337656562396231343233313335626266313837 Dec 16 12:16:40.289941 kernel: audit: type=1327 audit(1765887400.217:602): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239376661316236666337656562396231343233313335626266313837 Dec 16 12:16:40.219000 audit: BPF prog-id=197 op=UNLOAD Dec 16 12:16:40.294928 kernel: audit: type=1334 audit(1765887400.219:603): prog-id=197 op=UNLOAD Dec 16 12:16:40.219000 audit[4678]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:40.310255 kernel: audit: type=1300 
audit(1765887400.219:603): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:40.219000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239376661316236666337656562396231343233313335626266313837 Dec 16 12:16:40.326271 kernel: audit: type=1327 audit(1765887400.219:603): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239376661316236666337656562396231343233313335626266313837 Dec 16 12:16:40.219000 audit: BPF prog-id=196 op=UNLOAD Dec 16 12:16:40.331047 kernel: audit: type=1334 audit(1765887400.219:604): prog-id=196 op=UNLOAD Dec 16 12:16:40.219000 audit[4678]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4169 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:40.219000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239376661316236666337656562396231343233313335626266313837 Dec 16 12:16:40.219000 audit: BPF prog-id=198 op=LOAD Dec 16 12:16:40.219000 audit[4678]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4169 pid=4678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:40.219000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239376661316236666337656562396231343233313335626266313837 Dec 16 12:16:40.336795 containerd[2086]: time="2025-12-16T12:16:40.336762464Z" level=info msg="StartContainer for \"b97fa1b6fc7eeb9b1423135bbf187784be9a84b3e55f53b33d5b76baa0d72fb0\" returns successfully" Dec 16 12:16:40.517018 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:16:40.517114 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 16 12:16:40.702620 kubelet[3679]: I1216 12:16:40.702580 3679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/db7dfe29-6011-451c-b4ed-7fa97679404d-whisker-backend-key-pair\") pod \"db7dfe29-6011-451c-b4ed-7fa97679404d\" (UID: \"db7dfe29-6011-451c-b4ed-7fa97679404d\") " Dec 16 12:16:40.702620 kubelet[3679]: I1216 12:16:40.702620 3679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db7dfe29-6011-451c-b4ed-7fa97679404d-whisker-ca-bundle\") pod \"db7dfe29-6011-451c-b4ed-7fa97679404d\" (UID: \"db7dfe29-6011-451c-b4ed-7fa97679404d\") " Dec 16 12:16:40.702620 kubelet[3679]: I1216 12:16:40.702634 3679 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69q8d\" (UniqueName: \"kubernetes.io/projected/db7dfe29-6011-451c-b4ed-7fa97679404d-kube-api-access-69q8d\") pod \"db7dfe29-6011-451c-b4ed-7fa97679404d\" (UID: \"db7dfe29-6011-451c-b4ed-7fa97679404d\") " Dec 16 12:16:40.703920 kubelet[3679]: I1216 12:16:40.703843 3679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db7dfe29-6011-451c-b4ed-7fa97679404d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "db7dfe29-6011-451c-b4ed-7fa97679404d" (UID: "db7dfe29-6011-451c-b4ed-7fa97679404d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:16:40.707514 kubelet[3679]: I1216 12:16:40.707416 3679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db7dfe29-6011-451c-b4ed-7fa97679404d-kube-api-access-69q8d" (OuterVolumeSpecName: "kube-api-access-69q8d") pod "db7dfe29-6011-451c-b4ed-7fa97679404d" (UID: "db7dfe29-6011-451c-b4ed-7fa97679404d"). InnerVolumeSpecName "kube-api-access-69q8d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:16:40.707595 kubelet[3679]: I1216 12:16:40.707277 3679 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db7dfe29-6011-451c-b4ed-7fa97679404d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "db7dfe29-6011-451c-b4ed-7fa97679404d" (UID: "db7dfe29-6011-451c-b4ed-7fa97679404d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:16:40.741214 systemd[1]: var-lib-kubelet-pods-db7dfe29\x2d6011\x2d451c\x2db4ed\x2d7fa97679404d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 12:16:40.741294 systemd[1]: var-lib-kubelet-pods-db7dfe29\x2d6011\x2d451c\x2db4ed\x2d7fa97679404d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d69q8d.mount: Deactivated successfully. 
Dec 16 12:16:40.803502 kubelet[3679]: I1216 12:16:40.803227 3679 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/db7dfe29-6011-451c-b4ed-7fa97679404d-whisker-backend-key-pair\") on node \"ci-4547.0.0-a-8648328498\" DevicePath \"\"" Dec 16 12:16:40.803502 kubelet[3679]: I1216 12:16:40.803266 3679 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db7dfe29-6011-451c-b4ed-7fa97679404d-whisker-ca-bundle\") on node \"ci-4547.0.0-a-8648328498\" DevicePath \"\"" Dec 16 12:16:40.803502 kubelet[3679]: I1216 12:16:40.803276 3679 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-69q8d\" (UniqueName: \"kubernetes.io/projected/db7dfe29-6011-451c-b4ed-7fa97679404d-kube-api-access-69q8d\") on node \"ci-4547.0.0-a-8648328498\" DevicePath \"\"" Dec 16 12:16:40.988046 systemd[1]: Removed slice kubepods-besteffort-poddb7dfe29_6011_451c_b4ed_7fa97679404d.slice - libcontainer container kubepods-besteffort-poddb7dfe29_6011_451c_b4ed_7fa97679404d.slice. Dec 16 12:16:41.142884 kubelet[3679]: I1216 12:16:41.142711 3679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7b48v" podStartSLOduration=2.07284788 podStartE2EDuration="16.142696719s" podCreationTimestamp="2025-12-16 12:16:25 +0000 UTC" firstStartedPulling="2025-12-16 12:16:25.667080388 +0000 UTC m=+18.760005937" lastFinishedPulling="2025-12-16 12:16:39.736929227 +0000 UTC m=+32.829854776" observedRunningTime="2025-12-16 12:16:41.126032989 +0000 UTC m=+34.218958538" watchObservedRunningTime="2025-12-16 12:16:41.142696719 +0000 UTC m=+34.235622268" Dec 16 12:16:41.196425 systemd[1]: Created slice kubepods-besteffort-poda5a3ab38_4175_48e7_a44e_56f2f099ecc8.slice - libcontainer container kubepods-besteffort-poda5a3ab38_4175_48e7_a44e_56f2f099ecc8.slice. 
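Annotator note: the pod_startup_latency_tracker entry above reports both podStartE2EDuration and podStartSLOduration for calico-node-7b48v; recomputing from the logged timestamps shows the SLO figure is the end-to-end time minus the image pull window. A quick check using the same values (illustration only):

```go
package main

import (
	"fmt"
	"time"
)

// mustParse handles the timestamp format used by these kubelet log fields.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the pod_startup_latency_tracker entry above.
	created := mustParse("2025-12-16 12:16:25 +0000 UTC")
	firstPull := mustParse("2025-12-16 12:16:25.667080388 +0000 UTC")
	lastPull := mustParse("2025-12-16 12:16:39.736929227 +0000 UTC")
	observed := mustParse("2025-12-16 12:16:41.142696719 +0000 UTC")

	e2e := observed.Sub(created)     // 16.142696719s = logged podStartE2EDuration
	pull := lastPull.Sub(firstPull)  // ~14.07s pulling ghcr.io/flatcar/calico/node
	fmt.Println(e2e, pull, e2e-pull) // e2e - pull = 2.07284788s = logged podStartSLOduration
}
```

16.142696719s minus the ~14.069848839s pull equals the reported 2.07284788s.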
Dec 16 12:16:41.305265 kubelet[3679]: I1216 12:16:41.305233 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a5a3ab38-4175-48e7-a44e-56f2f099ecc8-whisker-backend-key-pair\") pod \"whisker-f8d759d65-5qwgg\" (UID: \"a5a3ab38-4175-48e7-a44e-56f2f099ecc8\") " pod="calico-system/whisker-f8d759d65-5qwgg" Dec 16 12:16:41.305265 kubelet[3679]: I1216 12:16:41.305273 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5a3ab38-4175-48e7-a44e-56f2f099ecc8-whisker-ca-bundle\") pod \"whisker-f8d759d65-5qwgg\" (UID: \"a5a3ab38-4175-48e7-a44e-56f2f099ecc8\") " pod="calico-system/whisker-f8d759d65-5qwgg" Dec 16 12:16:41.305413 kubelet[3679]: I1216 12:16:41.305289 3679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdwlh\" (UniqueName: \"kubernetes.io/projected/a5a3ab38-4175-48e7-a44e-56f2f099ecc8-kube-api-access-mdwlh\") pod \"whisker-f8d759d65-5qwgg\" (UID: \"a5a3ab38-4175-48e7-a44e-56f2f099ecc8\") " pod="calico-system/whisker-f8d759d65-5qwgg" Dec 16 12:16:41.500134 containerd[2086]: time="2025-12-16T12:16:41.500088129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f8d759d65-5qwgg,Uid:a5a3ab38-4175-48e7-a44e-56f2f099ecc8,Namespace:calico-system,Attempt:0,}" Dec 16 12:16:41.611477 systemd-networkd[1678]: cali8383a04f737: Link UP Dec 16 12:16:41.612160 systemd-networkd[1678]: cali8383a04f737: Gained carrier Dec 16 12:16:41.628469 containerd[2086]: 2025-12-16 12:16:41.520 [INFO][4739] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:16:41.628469 containerd[2086]: 2025-12-16 12:16:41.554 [INFO][4739] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--8648328498-k8s-whisker--f8d759d65--5qwgg-eth0 whisker-f8d759d65- calico-system a5a3ab38-4175-48e7-a44e-56f2f099ecc8 896 0 2025-12-16 12:16:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f8d759d65 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547.0.0-a-8648328498 whisker-f8d759d65-5qwgg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8383a04f737 [] [] }} ContainerID="21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480" Namespace="calico-system" Pod="whisker-f8d759d65-5qwgg" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-whisker--f8d759d65--5qwgg-" Dec 16 12:16:41.628469 containerd[2086]: 2025-12-16 12:16:41.555 [INFO][4739] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480" Namespace="calico-system" Pod="whisker-f8d759d65-5qwgg" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-whisker--f8d759d65--5qwgg-eth0" Dec 16 12:16:41.628469 containerd[2086]: 2025-12-16 12:16:41.572 [INFO][4752] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480" HandleID="k8s-pod-network.21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480" Workload="ci--4547.0.0--a--8648328498-k8s-whisker--f8d759d65--5qwgg-eth0" Dec 16 12:16:41.628631 containerd[2086]: 2025-12-16 12:16:41.572 [INFO][4752] ipam/ipam_plugin.go 275: Auto assigning 
IP ContainerID="21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480" HandleID="k8s-pod-network.21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480" Workload="ci--4547.0.0--a--8648328498-k8s-whisker--f8d759d65--5qwgg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b050), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-8648328498", "pod":"whisker-f8d759d65-5qwgg", "timestamp":"2025-12-16 12:16:41.572610864 +0000 UTC"}, Hostname:"ci-4547.0.0-a-8648328498", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:16:41.628631 containerd[2086]: 2025-12-16 12:16:41.572 [INFO][4752] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:16:41.628631 containerd[2086]: 2025-12-16 12:16:41.572 [INFO][4752] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:16:41.628631 containerd[2086]: 2025-12-16 12:16:41.572 [INFO][4752] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-8648328498' Dec 16 12:16:41.628631 containerd[2086]: 2025-12-16 12:16:41.577 [INFO][4752] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:41.628631 containerd[2086]: 2025-12-16 12:16:41.581 [INFO][4752] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-8648328498" Dec 16 12:16:41.628631 containerd[2086]: 2025-12-16 12:16:41.584 [INFO][4752] ipam/ipam.go 511: Trying affinity for 192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:41.628631 containerd[2086]: 2025-12-16 12:16:41.586 [INFO][4752] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:41.628631 containerd[2086]: 2025-12-16 12:16:41.587 [INFO][4752] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:41.628765 containerd[2086]: 2025-12-16 12:16:41.587 [INFO][4752] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.37.64/26 handle="k8s-pod-network.21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:41.628765 containerd[2086]: 2025-12-16 12:16:41.589 [INFO][4752] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480 Dec 16 12:16:41.628765 containerd[2086]: 2025-12-16 12:16:41.593 [INFO][4752] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.37.64/26 handle="k8s-pod-network.21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:41.628765 containerd[2086]: 2025-12-16 12:16:41.603 [INFO][4752] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.37.65/26] block=192.168.37.64/26 handle="k8s-pod-network.21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:41.628765 containerd[2086]: 2025-12-16 12:16:41.603 [INFO][4752] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.65/26] handle="k8s-pod-network.21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:41.628765 containerd[2086]: 2025-12-16 12:16:41.603 [INFO][4752] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:16:41.628765 containerd[2086]: 2025-12-16 12:16:41.603 [INFO][4752] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.37.65/26] IPv6=[] ContainerID="21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480" HandleID="k8s-pod-network.21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480" Workload="ci--4547.0.0--a--8648328498-k8s-whisker--f8d759d65--5qwgg-eth0" Dec 16 12:16:41.628891 containerd[2086]: 2025-12-16 12:16:41.605 [INFO][4739] cni-plugin/k8s.go 418: Populated endpoint ContainerID="21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480" Namespace="calico-system" Pod="whisker-f8d759d65-5qwgg" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-whisker--f8d759d65--5qwgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--8648328498-k8s-whisker--f8d759d65--5qwgg-eth0", GenerateName:"whisker-f8d759d65-", Namespace:"calico-system", SelfLink:"", UID:"a5a3ab38-4175-48e7-a44e-56f2f099ecc8", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 16, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f8d759d65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-8648328498", ContainerID:"", Pod:"whisker-f8d759d65-5qwgg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.37.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8383a04f737", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:16:41.628891 containerd[2086]: 2025-12-16 12:16:41.605 [INFO][4739] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.65/32] ContainerID="21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480" Namespace="calico-system" Pod="whisker-f8d759d65-5qwgg" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-whisker--f8d759d65--5qwgg-eth0" Dec 16 12:16:41.628942 containerd[2086]: 2025-12-16 12:16:41.605 [INFO][4739] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8383a04f737 ContainerID="21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480" Namespace="calico-system" Pod="whisker-f8d759d65-5qwgg" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-whisker--f8d759d65--5qwgg-eth0" Dec 16 12:16:41.628942 containerd[2086]: 2025-12-16 12:16:41.611 [INFO][4739] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480" Namespace="calico-system" Pod="whisker-f8d759d65-5qwgg" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-whisker--f8d759d65--5qwgg-eth0" Dec 16 12:16:41.628971 containerd[2086]: 2025-12-16 12:16:41.611 [INFO][4739] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480" Namespace="calico-system" Pod="whisker-f8d759d65-5qwgg" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-whisker--f8d759d65--5qwgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--8648328498-k8s-whisker--f8d759d65--5qwgg-eth0", GenerateName:"whisker-f8d759d65-", Namespace:"calico-system", SelfLink:"", UID:"a5a3ab38-4175-48e7-a44e-56f2f099ecc8", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 16, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f8d759d65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-8648328498", ContainerID:"21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480", Pod:"whisker-f8d759d65-5qwgg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.37.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8383a04f737", MAC:"de:5f:83:40:e4:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:16:41.629006 containerd[2086]: 2025-12-16 12:16:41.626 [INFO][4739] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480" Namespace="calico-system" Pod="whisker-f8d759d65-5qwgg" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-whisker--f8d759d65--5qwgg-eth0" Dec 16 12:16:41.660284 containerd[2086]: time="2025-12-16T12:16:41.659764996Z" level=info msg="connecting to shim 21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480" address="unix:///run/containerd/s/dbc069bd791727c3063ece4020fdd427a04fd99a2da08a5b1f1d6de680edb084" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:16:41.681582 systemd[1]: Started cri-containerd-21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480.scope - libcontainer container 21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480. 
Dec 16 12:16:41.688000 audit: BPF prog-id=199 op=LOAD Dec 16 12:16:41.688000 audit: BPF prog-id=200 op=LOAD Dec 16 12:16:41.688000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4773 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:41.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231663164363262643462646231373536303232356338613162343561 Dec 16 12:16:41.688000 audit: BPF prog-id=200 op=UNLOAD Dec 16 12:16:41.688000 audit[4786]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4773 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:41.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231663164363262643462646231373536303232356338613162343561 Dec 16 12:16:41.688000 audit: BPF prog-id=201 op=LOAD Dec 16 12:16:41.688000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4773 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:41.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231663164363262643462646231373536303232356338613162343561 Dec 16 12:16:41.688000 audit: BPF prog-id=202 op=LOAD Dec 16 12:16:41.688000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4773 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:41.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231663164363262643462646231373536303232356338613162343561 Dec 16 12:16:41.688000 audit: BPF prog-id=202 op=UNLOAD Dec 16 12:16:41.688000 audit[4786]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4773 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:41.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231663164363262643462646231373536303232356338613162343561 Dec 16 12:16:41.689000 audit: BPF prog-id=201 op=UNLOAD Dec 16 12:16:41.689000 audit[4786]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4773 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:41.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231663164363262643462646231373536303232356338613162343561 Dec 16 12:16:41.689000 audit: BPF prog-id=203 op=LOAD Dec 16 12:16:41.689000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4773 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:41.689000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231663164363262643462646231373536303232356338613162343561 Dec 16 12:16:41.710780 containerd[2086]: time="2025-12-16T12:16:41.710702243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f8d759d65-5qwgg,Uid:a5a3ab38-4175-48e7-a44e-56f2f099ecc8,Namespace:calico-system,Attempt:0,} returns sandbox id \"21f1d62bd4bdb17560225c8a1b45aecf4e106bcdfaad2f7cbf607a228f440480\"" Dec 16 12:16:41.712393 containerd[2086]: time="2025-12-16T12:16:41.712271616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:16:41.999844 containerd[2086]: time="2025-12-16T12:16:41.999803333Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:42.003614 containerd[2086]: time="2025-12-16T12:16:42.003575996Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:16:42.003677 containerd[2086]: time="2025-12-16T12:16:42.003639318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:42.003830 kubelet[3679]: E1216 12:16:42.003771 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:16:42.003830 kubelet[3679]: E1216 12:16:42.003822 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:16:42.010259 kubelet[3679]: E1216 12:16:42.010220 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8fc68e7e88824b03a0c3d3e5f42811b9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mdwlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8d759d65-5qwgg_calico-system(a5a3ab38-4175-48e7-a44e-56f2f099ecc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:42.012617 containerd[2086]: time="2025-12-16T12:16:42.012589964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:16:42.091000 audit: BPF prog-id=204 op=LOAD Dec 16 12:16:42.091000 audit[4932]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffeb802ed8 a2=98 a3=ffffeb802ec8 items=0 ppid=4828 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.091000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:16:42.091000 audit: BPF prog-id=204 op=UNLOAD Dec 16 12:16:42.091000 audit[4932]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffeb802ea8 a3=0 items=0 ppid=4828 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.091000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:16:42.091000 audit: BPF prog-id=205 op=LOAD Dec 16 12:16:42.091000 audit[4932]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffeb802d88 a2=74 a3=95 items=0 ppid=4828 pid=4932 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.091000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:16:42.091000 audit: BPF prog-id=205 op=UNLOAD Dec 16 12:16:42.091000 audit[4932]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4828 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.091000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:16:42.091000 audit: BPF prog-id=206 op=LOAD Dec 16 12:16:42.091000 audit[4932]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffeb802db8 a2=40 a3=ffffeb802de8 items=0 ppid=4828 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.091000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:16:42.091000 audit: BPF prog-id=206 op=UNLOAD Dec 16 12:16:42.091000 audit[4932]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffeb802de8 items=0 ppid=4828 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.091000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:16:42.093000 audit: BPF prog-id=207 op=LOAD Dec 16 12:16:42.093000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe6c862e8 a2=98 a3=ffffe6c862d8 items=0 ppid=4828 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.093000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:16:42.093000 audit: BPF prog-id=207 op=UNLOAD Dec 16 12:16:42.093000 audit[4933]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe6c862b8 a3=0 items=0 ppid=4828 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.093000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:16:42.093000 audit: BPF prog-id=208 
op=LOAD Dec 16 12:16:42.093000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe6c85f78 a2=74 a3=95 items=0 ppid=4828 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.093000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:16:42.093000 audit: BPF prog-id=208 op=UNLOAD Dec 16 12:16:42.093000 audit[4933]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4828 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.093000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:16:42.093000 audit: BPF prog-id=209 op=LOAD Dec 16 12:16:42.093000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe6c85fd8 a2=94 a3=2 items=0 ppid=4828 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.093000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:16:42.093000 audit: BPF prog-id=209 op=UNLOAD Dec 16 12:16:42.093000 audit[4933]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4828 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.093000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:16:42.170000 audit: BPF prog-id=210 op=LOAD Dec 16 12:16:42.170000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe6c85f98 a2=40 a3=ffffe6c85fc8 items=0 ppid=4828 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.170000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:16:42.171000 audit: BPF prog-id=210 op=UNLOAD Dec 16 12:16:42.171000 audit[4933]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffe6c85fc8 items=0 ppid=4828 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.171000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:16:42.176000 audit: BPF prog-id=211 op=LOAD Dec 16 12:16:42.176000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe6c85fa8 a2=94 a3=4 items=0 ppid=4828 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.176000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:16:42.177000 audit: BPF prog-id=211 op=UNLOAD Dec 16 12:16:42.177000 audit[4933]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4828 pid=4933 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.177000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:16:42.177000 audit: BPF prog-id=212 op=LOAD Dec 16 12:16:42.177000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe6c85de8 a2=94 a3=5 items=0 ppid=4828 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.177000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:16:42.177000 audit: BPF prog-id=212 op=UNLOAD Dec 16 12:16:42.177000 audit[4933]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4828 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.177000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:16:42.177000 audit: BPF prog-id=213 op=LOAD Dec 16 12:16:42.177000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe6c86018 a2=94 a3=6 items=0 ppid=4828 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.177000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:16:42.177000 audit: BPF prog-id=213 op=UNLOAD Dec 16 12:16:42.177000 audit[4933]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4828 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.177000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:16:42.177000 audit: BPF prog-id=214 op=LOAD Dec 16 12:16:42.177000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe6c857e8 a2=94 a3=83 items=0 ppid=4828 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.177000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:16:42.177000 audit: BPF prog-id=215 op=LOAD Dec 16 12:16:42.177000 audit[4933]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe6c855a8 a2=94 a3=2 items=0 ppid=4828 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.177000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:16:42.177000 audit: BPF prog-id=215 op=UNLOAD Dec 16 12:16:42.177000 audit[4933]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4828 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.177000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:16:42.178000 audit: BPF prog-id=214 op=UNLOAD Dec 16 12:16:42.178000 audit[4933]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=5aeb620 a3=5adeb00 items=0 ppid=4828 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.178000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:16:42.183000 audit: BPF prog-id=216 op=LOAD Dec 16 12:16:42.183000 audit[4936]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd1880538 a2=98 a3=ffffd1880528 items=0 ppid=4828 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.183000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:16:42.183000 audit: BPF prog-id=216 op=UNLOAD Dec 16 12:16:42.183000 audit[4936]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd1880508 a3=0 items=0 ppid=4828 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.183000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:16:42.183000 audit: BPF prog-id=217 op=LOAD Dec 16 12:16:42.183000 audit[4936]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd18803e8 a2=74 a3=95 items=0 ppid=4828 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.183000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:16:42.183000 audit: BPF prog-id=217 op=UNLOAD Dec 16 12:16:42.183000 audit[4936]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4828 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.183000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:16:42.183000 audit: BPF prog-id=218 op=LOAD Dec 16 12:16:42.183000 audit[4936]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd1880418 a2=40 a3=ffffd1880448 items=0 ppid=4828 pid=4936 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.183000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:16:42.183000 audit: BPF prog-id=218 op=UNLOAD Dec 16 12:16:42.183000 audit[4936]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffd1880448 items=0 ppid=4828 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.183000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:16:42.281723 systemd-networkd[1678]: vxlan.calico: Link UP Dec 16 12:16:42.281729 systemd-networkd[1678]: vxlan.calico: Gained carrier Dec 16 12:16:42.291000 audit: BPF prog-id=219 op=LOAD Dec 16 12:16:42.291000 audit[4965]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffd860dd8 a2=98 a3=fffffd860dc8 items=0 ppid=4828 pid=4965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.291000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:16:42.291000 audit: BPF prog-id=219 op=UNLOAD Dec 16 12:16:42.291000 audit[4965]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffffd860da8 a3=0 items=0 ppid=4828 pid=4965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.291000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:16:42.291000 audit: BPF prog-id=220 op=LOAD Dec 16 12:16:42.291000 audit[4965]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffd860ab8 a2=74 a3=95 items=0 ppid=4828 pid=4965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.291000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:16:42.291000 audit: BPF prog-id=220 op=UNLOAD Dec 16 12:16:42.291000 audit[4965]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4828 pid=4965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.291000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:16:42.291000 audit: BPF prog-id=221 op=LOAD Dec 16 12:16:42.291000 audit[4965]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffd860b18 a2=94 a3=2 items=0 ppid=4828 pid=4965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.291000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:16:42.292000 audit: BPF prog-id=221 op=UNLOAD Dec 16 12:16:42.292000 audit[4965]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4828 pid=4965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.292000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:16:42.292000 audit: BPF prog-id=222 op=LOAD Dec 16 12:16:42.292000 audit[4965]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffd860998 a2=40 a3=fffffd8609c8 items=0 ppid=4828 pid=4965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.292000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:16:42.292000 audit: BPF prog-id=222 op=UNLOAD Dec 16 12:16:42.292000 audit[4965]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=fffffd8609c8 items=0 ppid=4828 pid=4965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.292000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:16:42.292000 audit: BPF prog-id=223 op=LOAD Dec 16 12:16:42.292000 audit[4965]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffd860ae8 a2=94 a3=b7 items=0 ppid=4828 pid=4965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.292000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:16:42.292000 audit: BPF prog-id=223 op=UNLOAD Dec 16 12:16:42.292000 audit[4965]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4828 pid=4965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.292000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:16:42.294000 audit: BPF prog-id=224 op=LOAD Dec 16 12:16:42.294000 audit[4965]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffd860198 a2=94 a3=2 items=0 ppid=4828 pid=4965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.294000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:16:42.296000 audit: BPF prog-id=224 op=UNLOAD Dec 16 12:16:42.296000 audit[4965]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4828 pid=4965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.296000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:16:42.296000 audit: BPF prog-id=225 op=LOAD Dec 16 12:16:42.296000 audit[4965]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffd860328 a2=94 a3=30 items=0 ppid=4828 pid=4965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.296000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:16:42.300000 audit: BPF prog-id=226 op=LOAD Dec 16 12:16:42.300000 audit[4969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe3bf3528 a2=98 a3=ffffe3bf3518 items=0 ppid=4828 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.300000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:16:42.300000 audit: BPF prog-id=226 op=UNLOAD Dec 16 12:16:42.300000 audit[4969]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 
a0=3 a1=57156c a2=ffffe3bf34f8 a3=0 items=0 ppid=4828 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.300000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:16:42.300000 audit: BPF prog-id=227 op=LOAD Dec 16 12:16:42.300000 audit[4969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe3bf31b8 a2=74 a3=95 items=0 ppid=4828 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.300000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:16:42.300000 audit: BPF prog-id=227 op=UNLOAD Dec 16 12:16:42.300000 audit[4969]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4828 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.300000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:16:42.300000 audit: BPF prog-id=228 op=LOAD Dec 16 12:16:42.300000 audit[4969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe3bf3218 a2=94 a3=2 items=0 ppid=4828 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.300000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:16:42.300000 audit: BPF prog-id=228 op=UNLOAD Dec 16 12:16:42.300000 audit[4969]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4828 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.300000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:16:42.314693 containerd[2086]: time="2025-12-16T12:16:42.314603034Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:42.371342 containerd[2086]: time="2025-12-16T12:16:42.371188991Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:16:42.371342 containerd[2086]: time="2025-12-16T12:16:42.371196335Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:42.371989 kubelet[3679]: E1216 12:16:42.371480 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:16:42.371989 kubelet[3679]: E1216 12:16:42.371526 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:16:42.372065 kubelet[3679]: E1216 12:16:42.371627 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdwlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8d759d65-5qwgg_calico-system(a5a3ab38-4175-48e7-a44e-56f2f099ecc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:42.373068 kubelet[3679]: E1216 12:16:42.373007 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8d759d65-5qwgg" podUID="a5a3ab38-4175-48e7-a44e-56f2f099ecc8" Dec 16 12:16:42.384000 audit: BPF prog-id=229 op=LOAD Dec 16 12:16:42.384000 audit[4969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe3bf31d8 a2=40 a3=ffffe3bf3208 items=0 ppid=4828 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.384000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:16:42.384000 audit: BPF prog-id=229 op=UNLOAD Dec 16 12:16:42.384000 audit[4969]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffe3bf3208 items=0 ppid=4828 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.384000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:16:42.391000 audit: BPF prog-id=230 op=LOAD Dec 16 12:16:42.391000 audit[4969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe3bf31e8 a2=94 a3=4 items=0 ppid=4828 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.391000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:16:42.391000 audit: BPF prog-id=230 op=UNLOAD Dec 16 12:16:42.391000 audit[4969]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4828 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.391000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:16:42.391000 audit: BPF prog-id=231 op=LOAD Dec 16 12:16:42.391000 audit[4969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe3bf3028 a2=94 a3=5 items=0 ppid=4828 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.391000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:16:42.391000 audit: BPF prog-id=231 op=UNLOAD Dec 16 12:16:42.391000 audit[4969]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c 
a2=70 a3=5 items=0 ppid=4828 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.391000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:16:42.392000 audit: BPF prog-id=232 op=LOAD Dec 16 12:16:42.392000 audit[4969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe3bf3258 a2=94 a3=6 items=0 ppid=4828 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.392000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:16:42.392000 audit: BPF prog-id=232 op=UNLOAD Dec 16 12:16:42.392000 audit[4969]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4828 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.392000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:16:42.392000 audit: BPF prog-id=233 op=LOAD Dec 16 12:16:42.392000 audit[4969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe3bf2a28 a2=94 a3=83 items=0 ppid=4828 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.392000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:16:42.393000 audit: BPF prog-id=234 op=LOAD Dec 16 12:16:42.393000 audit[4969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe3bf27e8 a2=94 a3=2 items=0 ppid=4828 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.393000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:16:42.393000 audit: BPF prog-id=234 op=UNLOAD Dec 16 12:16:42.393000 audit[4969]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4828 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.393000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:16:42.393000 audit: BPF prog-id=233 
op=UNLOAD Dec 16 12:16:42.393000 audit[4969]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=3bedd620 a3=3bed0b00 items=0 ppid=4828 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.393000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:16:42.399000 audit: BPF prog-id=225 op=UNLOAD Dec 16 12:16:42.399000 audit[4828]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=40007da3c0 a2=0 a3=0 items=0 ppid=4817 pid=4828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.399000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 12:16:42.466000 audit[4994]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=4994 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:16:42.466000 audit[4994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffc038eb50 a2=0 a3=ffff8cae0fa8 items=0 ppid=4828 pid=4994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.467000 audit[4996]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=4996 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:16:42.466000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:16:42.467000 audit[4996]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffe4bb2400 a2=0 a3=ffff932b1fa8 items=0 ppid=4828 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.467000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:16:42.469000 audit[4993]: NETFILTER_CFG table=raw:124 family=2 entries=21 op=nft_register_chain pid=4993 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:16:42.469000 audit[4993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffcbe46b50 a2=0 a3=ffffa1fc6fa8 items=0 ppid=4828 pid=4993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.469000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:16:42.490000 audit[5000]: NETFILTER_CFG table=filter:125 family=2 entries=94 op=nft_register_chain pid=5000 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:16:42.490000 audit[5000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffd7c69720 a2=0 
a3=ffffb31defa8 items=0 ppid=4828 pid=5000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:42.490000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:16:42.847608 systemd-networkd[1678]: cali8383a04f737: Gained IPv6LL Dec 16 12:16:42.984367 kubelet[3679]: I1216 12:16:42.984130 3679 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db7dfe29-6011-451c-b4ed-7fa97679404d" path="/var/lib/kubelet/pods/db7dfe29-6011-451c-b4ed-7fa97679404d/volumes" Dec 16 12:16:43.111968 kubelet[3679]: E1216 12:16:43.111858 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8d759d65-5qwgg" podUID="a5a3ab38-4175-48e7-a44e-56f2f099ecc8" Dec 16 12:16:43.137000 audit[5010]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=5010 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:43.137000 audit[5010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe3d54a40 a2=0 a3=1 items=0 ppid=3827 pid=5010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:43.137000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:43.146000 audit[5010]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=5010 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:43.146000 audit[5010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe3d54a40 a2=0 a3=1 items=0 ppid=3827 pid=5010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:43.146000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:44.191587 systemd-networkd[1678]: vxlan.calico: Gained IPv6LL Dec 16 12:16:44.983570 containerd[2086]: time="2025-12-16T12:16:44.983348401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6ff77cd6-97mm5,Uid:9cbc393d-9eb5-4eab-a130-293205187b74,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:16:45.070282 systemd-networkd[1678]: cali469d61308f2: Link UP Dec 16 12:16:45.070954 systemd-networkd[1678]: cali469d61308f2: Gained carrier Dec 16 
12:16:45.086312 containerd[2086]: 2025-12-16 12:16:45.017 [INFO][5013] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--97mm5-eth0 calico-apiserver-7d6ff77cd6- calico-apiserver 9cbc393d-9eb5-4eab-a130-293205187b74 829 0 2025-12-16 12:16:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d6ff77cd6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-a-8648328498 calico-apiserver-7d6ff77cd6-97mm5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali469d61308f2 [] [] }} ContainerID="556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff77cd6-97mm5" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--97mm5-" Dec 16 12:16:45.086312 containerd[2086]: 2025-12-16 12:16:45.017 [INFO][5013] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff77cd6-97mm5" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--97mm5-eth0" Dec 16 12:16:45.086312 containerd[2086]: 2025-12-16 12:16:45.034 [INFO][5025] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7" HandleID="k8s-pod-network.556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7" Workload="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--97mm5-eth0" Dec 16 12:16:45.086536 containerd[2086]: 2025-12-16 12:16:45.034 [INFO][5025] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7" HandleID="k8s-pod-network.556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7" Workload="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--97mm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b270), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-a-8648328498", "pod":"calico-apiserver-7d6ff77cd6-97mm5", "timestamp":"2025-12-16 12:16:45.034063369 +0000 UTC"}, Hostname:"ci-4547.0.0-a-8648328498", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:16:45.086536 containerd[2086]: 2025-12-16 12:16:45.034 [INFO][5025] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:16:45.086536 containerd[2086]: 2025-12-16 12:16:45.034 [INFO][5025] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:16:45.086536 containerd[2086]: 2025-12-16 12:16:45.034 [INFO][5025] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-8648328498' Dec 16 12:16:45.086536 containerd[2086]: 2025-12-16 12:16:45.038 [INFO][5025] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:45.086536 containerd[2086]: 2025-12-16 12:16:45.042 [INFO][5025] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-8648328498" Dec 16 12:16:45.086536 containerd[2086]: 2025-12-16 12:16:45.046 [INFO][5025] ipam/ipam.go 511: Trying affinity for 192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:45.086536 containerd[2086]: 2025-12-16 12:16:45.047 [INFO][5025] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:45.086536 containerd[2086]: 2025-12-16 12:16:45.049 [INFO][5025] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:45.086701 containerd[2086]: 2025-12-16 12:16:45.049 [INFO][5025] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.37.64/26 handle="k8s-pod-network.556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:45.086701 containerd[2086]: 2025-12-16 12:16:45.050 [INFO][5025] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7 Dec 16 12:16:45.086701 containerd[2086]: 2025-12-16 12:16:45.055 [INFO][5025] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.37.64/26 handle="k8s-pod-network.556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:45.086701 containerd[2086]: 2025-12-16 12:16:45.064 [INFO][5025] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.37.66/26] block=192.168.37.64/26 handle="k8s-pod-network.556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:45.086701 containerd[2086]: 2025-12-16 12:16:45.064 [INFO][5025] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.66/26] handle="k8s-pod-network.556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:45.086701 containerd[2086]: 2025-12-16 12:16:45.064 [INFO][5025] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:16:45.086701 containerd[2086]: 2025-12-16 12:16:45.064 [INFO][5025] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.37.66/26] IPv6=[] ContainerID="556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7" HandleID="k8s-pod-network.556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7" Workload="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--97mm5-eth0" Dec 16 12:16:45.086793 containerd[2086]: 2025-12-16 12:16:45.066 [INFO][5013] cni-plugin/k8s.go 418: Populated endpoint ContainerID="556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff77cd6-97mm5" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--97mm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--97mm5-eth0", GenerateName:"calico-apiserver-7d6ff77cd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"9cbc393d-9eb5-4eab-a130-293205187b74", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 16, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6ff77cd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-8648328498", ContainerID:"", Pod:"calico-apiserver-7d6ff77cd6-97mm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali469d61308f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:16:45.086830 containerd[2086]: 2025-12-16 12:16:45.067 [INFO][5013] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.66/32] ContainerID="556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff77cd6-97mm5" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--97mm5-eth0" Dec 16 12:16:45.086830 containerd[2086]: 2025-12-16 12:16:45.067 [INFO][5013] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali469d61308f2 ContainerID="556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff77cd6-97mm5" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--97mm5-eth0" Dec 16 12:16:45.086830 containerd[2086]: 2025-12-16 12:16:45.071 [INFO][5013] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff77cd6-97mm5" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--97mm5-eth0" Dec 16 12:16:45.086873 containerd[2086]: 2025-12-16 12:16:45.071 
[INFO][5013] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff77cd6-97mm5" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--97mm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--97mm5-eth0", GenerateName:"calico-apiserver-7d6ff77cd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"9cbc393d-9eb5-4eab-a130-293205187b74", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 16, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6ff77cd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-8648328498", ContainerID:"556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7", Pod:"calico-apiserver-7d6ff77cd6-97mm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali469d61308f2", MAC:"66:43:e6:2b:18:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:16:45.086908 containerd[2086]: 2025-12-16 12:16:45.083 [INFO][5013] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff77cd6-97mm5" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--97mm5-eth0" Dec 16 12:16:45.095000 audit[5038]: NETFILTER_CFG table=filter:128 family=2 entries=50 op=nft_register_chain pid=5038 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:16:45.095000 audit[5038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28208 a0=3 a1=ffffd56b85a0 a2=0 a3=ffffa54bdfa8 items=0 ppid=4828 pid=5038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:45.095000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:16:45.116854 containerd[2086]: time="2025-12-16T12:16:45.116826771Z" level=info msg="connecting to shim 556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7" address="unix:///run/containerd/s/006bb9b9b0b0173b0314d5a095dab44b3b8dc5842ff99a657459440ffd7f6855" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:16:45.144593 systemd[1]: Started cri-containerd-556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7.scope - libcontainer container 
556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7. Dec 16 12:16:45.150000 audit: BPF prog-id=235 op=LOAD Dec 16 12:16:45.151000 audit: BPF prog-id=236 op=LOAD Dec 16 12:16:45.151000 audit[5058]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=5046 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:45.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535366163396362643262643331383435643465383135336366313863 Dec 16 12:16:45.151000 audit: BPF prog-id=236 op=UNLOAD Dec 16 12:16:45.151000 audit[5058]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5046 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:45.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535366163396362643262643331383435643465383135336366313863 Dec 16 12:16:45.151000 audit: BPF prog-id=237 op=LOAD Dec 16 12:16:45.151000 audit[5058]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=5046 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:45.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535366163396362643262643331383435643465383135336366313863 Dec 16 12:16:45.151000 audit: BPF prog-id=238 op=LOAD Dec 16 12:16:45.151000 audit[5058]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=5046 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:45.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535366163396362643262643331383435643465383135336366313863 Dec 16 12:16:45.151000 audit: BPF prog-id=238 op=UNLOAD Dec 16 12:16:45.151000 audit[5058]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5046 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:45.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535366163396362643262643331383435643465383135336366313863 Dec 16 12:16:45.151000 audit: 
BPF prog-id=237 op=UNLOAD Dec 16 12:16:45.151000 audit[5058]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5046 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:45.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535366163396362643262643331383435643465383135336366313863 Dec 16 12:16:45.151000 audit: BPF prog-id=239 op=LOAD Dec 16 12:16:45.151000 audit[5058]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=5046 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:45.151000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535366163396362643262643331383435643465383135336366313863 Dec 16 12:16:45.174203 containerd[2086]: time="2025-12-16T12:16:45.174059721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6ff77cd6-97mm5,Uid:9cbc393d-9eb5-4eab-a130-293205187b74,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"556ac9cbd2bd31845d4e8153cf18c156e22f39e40f1630e6f5d624d00e5e90f7\"" Dec 16 12:16:45.176684 containerd[2086]: time="2025-12-16T12:16:45.176659625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:16:45.478648 containerd[2086]: time="2025-12-16T12:16:45.478528722Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:45.480893 containerd[2086]: time="2025-12-16T12:16:45.480820896Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:16:45.480893 containerd[2086]: time="2025-12-16T12:16:45.480853321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:45.481076 kubelet[3679]: E1216 12:16:45.481040 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:16:45.481603 kubelet[3679]: E1216 12:16:45.481380 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:16:45.482374 kubelet[3679]: E1216 12:16:45.482313 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tbkwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d6ff77cd6-97mm5_calico-apiserver(9cbc393d-9eb5-4eab-a130-293205187b74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:45.483467 kubelet[3679]: E1216 12:16:45.483423 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5" podUID="9cbc393d-9eb5-4eab-a130-293205187b74" Dec 16 12:16:45.983472 containerd[2086]: time="2025-12-16T12:16:45.983397108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mk7sc,Uid:03762c75-9df3-49a2-9166-fb2b4578d7a1,Namespace:calico-system,Attempt:0,}" Dec 16 12:16:45.983472 containerd[2086]: time="2025-12-16T12:16:45.983397132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8kg2p,Uid:4505b85b-9178-4a13-950e-42d575e3f166,Namespace:kube-system,Attempt:0,}" Dec 16 12:16:46.099676 systemd-networkd[1678]: calif67d07088b0: Link UP Dec 16 12:16:46.100509 systemd-networkd[1678]: calif67d07088b0: Gained carrier Dec 16 12:16:46.116701 containerd[2086]: 2025-12-16 12:16:46.031 [INFO][5084] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--8648328498-k8s-csi--node--driver--mk7sc-eth0 
csi-node-driver- calico-system 03762c75-9df3-49a2-9166-fb2b4578d7a1 720 0 2025-12-16 12:16:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547.0.0-a-8648328498 csi-node-driver-mk7sc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif67d07088b0 [] [] }} ContainerID="d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9" Namespace="calico-system" Pod="csi-node-driver-mk7sc" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-csi--node--driver--mk7sc-" Dec 16 12:16:46.116701 containerd[2086]: 2025-12-16 12:16:46.032 [INFO][5084] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9" Namespace="calico-system" Pod="csi-node-driver-mk7sc" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-csi--node--driver--mk7sc-eth0" Dec 16 12:16:46.116701 containerd[2086]: 2025-12-16 12:16:46.058 [INFO][5112] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9" HandleID="k8s-pod-network.d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9" Workload="ci--4547.0.0--a--8648328498-k8s-csi--node--driver--mk7sc-eth0" Dec 16 12:16:46.116883 containerd[2086]: 2025-12-16 12:16:46.058 [INFO][5112] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9" HandleID="k8s-pod-network.d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9" Workload="ci--4547.0.0--a--8648328498-k8s-csi--node--driver--mk7sc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd980), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-8648328498", "pod":"csi-node-driver-mk7sc", "timestamp":"2025-12-16 12:16:46.058088812 +0000 UTC"}, Hostname:"ci-4547.0.0-a-8648328498", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:16:46.116883 containerd[2086]: 2025-12-16 12:16:46.058 [INFO][5112] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:16:46.116883 containerd[2086]: 2025-12-16 12:16:46.058 [INFO][5112] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:16:46.116883 containerd[2086]: 2025-12-16 12:16:46.058 [INFO][5112] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-8648328498' Dec 16 12:16:46.116883 containerd[2086]: 2025-12-16 12:16:46.064 [INFO][5112] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:46.116883 containerd[2086]: 2025-12-16 12:16:46.068 [INFO][5112] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-8648328498" Dec 16 12:16:46.116883 containerd[2086]: 2025-12-16 12:16:46.071 [INFO][5112] ipam/ipam.go 511: Trying affinity for 192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:46.116883 containerd[2086]: 2025-12-16 12:16:46.073 [INFO][5112] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:46.116883 containerd[2086]: 2025-12-16 12:16:46.075 [INFO][5112] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:46.117352 containerd[2086]: 2025-12-16 12:16:46.075 [INFO][5112] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.37.64/26 handle="k8s-pod-network.d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:46.117352 containerd[2086]: 2025-12-16 12:16:46.077 [INFO][5112] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9 Dec 16 12:16:46.117352 containerd[2086]: 2025-12-16 12:16:46.082 [INFO][5112] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.37.64/26 handle="k8s-pod-network.d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:46.117352 containerd[2086]: 2025-12-16 12:16:46.090 [INFO][5112] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.37.67/26] block=192.168.37.64/26 handle="k8s-pod-network.d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:46.117352 containerd[2086]: 2025-12-16 12:16:46.090 [INFO][5112] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.67/26] handle="k8s-pod-network.d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:46.117352 containerd[2086]: 2025-12-16 12:16:46.090 [INFO][5112] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:16:46.117352 containerd[2086]: 2025-12-16 12:16:46.091 [INFO][5112] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.37.67/26] IPv6=[] ContainerID="d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9" HandleID="k8s-pod-network.d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9" Workload="ci--4547.0.0--a--8648328498-k8s-csi--node--driver--mk7sc-eth0" Dec 16 12:16:46.117717 containerd[2086]: 2025-12-16 12:16:46.094 [INFO][5084] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9" Namespace="calico-system" Pod="csi-node-driver-mk7sc" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-csi--node--driver--mk7sc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--8648328498-k8s-csi--node--driver--mk7sc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"03762c75-9df3-49a2-9166-fb2b4578d7a1", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 16, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-8648328498", ContainerID:"", Pod:"csi-node-driver-mk7sc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif67d07088b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:16:46.117770 containerd[2086]: 2025-12-16 12:16:46.094 [INFO][5084] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.67/32] ContainerID="d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9" Namespace="calico-system" Pod="csi-node-driver-mk7sc" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-csi--node--driver--mk7sc-eth0" Dec 16 12:16:46.117770 containerd[2086]: 2025-12-16 12:16:46.094 [INFO][5084] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif67d07088b0 ContainerID="d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9" Namespace="calico-system" Pod="csi-node-driver-mk7sc" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-csi--node--driver--mk7sc-eth0" Dec 16 12:16:46.117770 containerd[2086]: 2025-12-16 12:16:46.100 [INFO][5084] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9" Namespace="calico-system" Pod="csi-node-driver-mk7sc" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-csi--node--driver--mk7sc-eth0" Dec 16 12:16:46.118011 containerd[2086]: 2025-12-16 12:16:46.101 [INFO][5084] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9" Namespace="calico-system" Pod="csi-node-driver-mk7sc" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-csi--node--driver--mk7sc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--8648328498-k8s-csi--node--driver--mk7sc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"03762c75-9df3-49a2-9166-fb2b4578d7a1", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 16, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-8648328498", ContainerID:"d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9", Pod:"csi-node-driver-mk7sc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif67d07088b0", MAC:"f6:0b:d5:2d:7b:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:16:46.118066 containerd[2086]: 2025-12-16 12:16:46.113 [INFO][5084] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9" Namespace="calico-system" Pod="csi-node-driver-mk7sc" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-csi--node--driver--mk7sc-eth0" Dec 16 12:16:46.122570 kubelet[3679]: E1216 12:16:46.122362 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5" podUID="9cbc393d-9eb5-4eab-a130-293205187b74" Dec 16 12:16:46.136566 kernel: kauditd_printk_skb: 256 callbacks suppressed Dec 16 12:16:46.136698 kernel: audit: type=1325 audit(1765887406.131:691): table=filter:129 family=2 entries=40 op=nft_register_chain pid=5129 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:16:46.131000 audit[5129]: NETFILTER_CFG table=filter:129 family=2 entries=40 op=nft_register_chain pid=5129 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:16:46.131000 audit[5129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20764 a0=3 a1=fffff8653710 a2=0 a3=ffffb862efa8 items=0 ppid=4828 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:16:46.162800 kernel: audit: type=1300 audit(1765887406.131:691): arch=c00000b7 syscall=211 success=yes exit=20764 a0=3 a1=fffff8653710 a2=0 a3=ffffb862efa8 items=0 ppid=4828 pid=5129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.131000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:16:46.177241 kernel: audit: type=1327 audit(1765887406.131:691): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:16:46.177000 audit[5131]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=5131 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:46.177000 audit[5131]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffeef68720 a2=0 a3=1 items=0 ppid=3827 pid=5131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.205771 kernel: audit: type=1325 audit(1765887406.177:692): table=filter:130 family=2 entries=20 op=nft_register_rule pid=5131 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:46.206158 kernel: audit: type=1300 audit(1765887406.177:692): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffeef68720 a2=0 a3=1 items=0 ppid=3827 pid=5131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.177000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:46.214781 kernel: audit: type=1327 audit(1765887406.177:692): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:46.187000 audit[5131]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=5131 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:46.224513 kernel: audit: type=1325 audit(1765887406.187:693): table=nat:131 family=2 entries=14 op=nft_register_rule pid=5131 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:46.187000 audit[5131]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffeef68720 a2=0 a3=1 items=0 ppid=3827 pid=5131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.242329 kernel: audit: type=1300 audit(1765887406.187:693): arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffeef68720 a2=0 a3=1 items=0 ppid=3827 pid=5131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.187000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 
12:16:46.251663 kernel: audit: type=1327 audit(1765887406.187:693): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:46.253686 containerd[2086]: time="2025-12-16T12:16:46.253619375Z" level=info msg="connecting to shim d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9" address="unix:///run/containerd/s/cfe9eec595a38a544191e7fa00ab0157a4a5ad6dbed9f3965ca974654f0d2265" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:16:46.260404 systemd-networkd[1678]: cali3b6bb02881b: Link UP Dec 16 12:16:46.261160 systemd-networkd[1678]: cali3b6bb02881b: Gained carrier Dec 16 12:16:46.283060 containerd[2086]: 2025-12-16 12:16:46.033 [INFO][5088] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--8kg2p-eth0 coredns-668d6bf9bc- kube-system 4505b85b-9178-4a13-950e-42d575e3f166 822 0 2025-12-16 12:16:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-a-8648328498 coredns-668d6bf9bc-8kg2p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3b6bb02881b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1" Namespace="kube-system" Pod="coredns-668d6bf9bc-8kg2p" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--8kg2p-" Dec 16 12:16:46.283060 containerd[2086]: 2025-12-16 12:16:46.033 [INFO][5088] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1" Namespace="kube-system" Pod="coredns-668d6bf9bc-8kg2p" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--8kg2p-eth0" Dec 16 12:16:46.283060 containerd[2086]: 2025-12-16 12:16:46.064 [INFO][5107] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1" HandleID="k8s-pod-network.7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1" Workload="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--8kg2p-eth0" Dec 16 12:16:46.282911 systemd[1]: Started cri-containerd-d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9.scope - libcontainer container d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9. 
Dec 16 12:16:46.283364 containerd[2086]: 2025-12-16 12:16:46.064 [INFO][5107] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1" HandleID="k8s-pod-network.7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1" Workload="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--8kg2p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b230), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-a-8648328498", "pod":"coredns-668d6bf9bc-8kg2p", "timestamp":"2025-12-16 12:16:46.064233413 +0000 UTC"}, Hostname:"ci-4547.0.0-a-8648328498", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:16:46.283364 containerd[2086]: 2025-12-16 12:16:46.064 [INFO][5107] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:16:46.283364 containerd[2086]: 2025-12-16 12:16:46.090 [INFO][5107] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:16:46.283364 containerd[2086]: 2025-12-16 12:16:46.091 [INFO][5107] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-8648328498' Dec 16 12:16:46.283364 containerd[2086]: 2025-12-16 12:16:46.165 [INFO][5107] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:46.283364 containerd[2086]: 2025-12-16 12:16:46.178 [INFO][5107] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-8648328498" Dec 16 12:16:46.283364 containerd[2086]: 2025-12-16 12:16:46.182 [INFO][5107] ipam/ipam.go 511: Trying affinity for 192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:46.283364 containerd[2086]: 2025-12-16 12:16:46.188 [INFO][5107] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:46.283364 containerd[2086]: 2025-12-16 12:16:46.190 [INFO][5107] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:46.283539 containerd[2086]: 2025-12-16 12:16:46.190 [INFO][5107] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.37.64/26 handle="k8s-pod-network.7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:46.283539 containerd[2086]: 2025-12-16 12:16:46.206 [INFO][5107] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1 Dec 16 12:16:46.283539 containerd[2086]: 2025-12-16 12:16:46.216 [INFO][5107] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.37.64/26 handle="k8s-pod-network.7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:46.283539 containerd[2086]: 2025-12-16 12:16:46.246 [INFO][5107] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.37.68/26] block=192.168.37.64/26 handle="k8s-pod-network.7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:46.283539 containerd[2086]: 2025-12-16 12:16:46.246 [INFO][5107] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.68/26] handle="k8s-pod-network.7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1" 
host="ci-4547.0.0-a-8648328498" Dec 16 12:16:46.283539 containerd[2086]: 2025-12-16 12:16:46.246 [INFO][5107] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:16:46.283539 containerd[2086]: 2025-12-16 12:16:46.246 [INFO][5107] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.37.68/26] IPv6=[] ContainerID="7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1" HandleID="k8s-pod-network.7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1" Workload="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--8kg2p-eth0" Dec 16 12:16:46.283641 containerd[2086]: 2025-12-16 12:16:46.256 [INFO][5088] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1" Namespace="kube-system" Pod="coredns-668d6bf9bc-8kg2p" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--8kg2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--8kg2p-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4505b85b-9178-4a13-950e-42d575e3f166", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 16, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-8648328498", ContainerID:"", Pod:"coredns-668d6bf9bc-8kg2p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b6bb02881b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:16:46.283641 containerd[2086]: 2025-12-16 12:16:46.257 [INFO][5088] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.68/32] ContainerID="7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1" Namespace="kube-system" Pod="coredns-668d6bf9bc-8kg2p" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--8kg2p-eth0" Dec 16 12:16:46.283641 containerd[2086]: 2025-12-16 12:16:46.257 [INFO][5088] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b6bb02881b ContainerID="7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1" Namespace="kube-system" Pod="coredns-668d6bf9bc-8kg2p" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--8kg2p-eth0" Dec 16 12:16:46.283641 containerd[2086]: 2025-12-16 12:16:46.265 [INFO][5088] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1" Namespace="kube-system" Pod="coredns-668d6bf9bc-8kg2p" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--8kg2p-eth0" Dec 16 12:16:46.283641 containerd[2086]: 2025-12-16 12:16:46.266 [INFO][5088] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1" Namespace="kube-system" Pod="coredns-668d6bf9bc-8kg2p" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--8kg2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--8kg2p-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4505b85b-9178-4a13-950e-42d575e3f166", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 16, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-8648328498", ContainerID:"7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1", Pod:"coredns-668d6bf9bc-8kg2p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b6bb02881b", MAC:"1a:db:d5:ce:ed:1f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:16:46.283641 containerd[2086]: 2025-12-16 12:16:46.281 [INFO][5088] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1" Namespace="kube-system" Pod="coredns-668d6bf9bc-8kg2p" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--8kg2p-eth0" Dec 16 12:16:46.295000 audit: BPF prog-id=240 op=LOAD Dec 16 12:16:46.301505 kernel: audit: type=1334 audit(1765887406.295:694): prog-id=240 op=LOAD Dec 16 12:16:46.300000 audit: BPF prog-id=241 op=LOAD Dec 16 12:16:46.300000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=5140 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.300000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432633331666635356466633633616332663266393362626132653931 Dec 16 12:16:46.300000 audit: BPF prog-id=241 op=UNLOAD Dec 16 12:16:46.300000 audit[5153]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5140 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432633331666635356466633633616332663266393362626132653931 Dec 16 12:16:46.301000 audit: BPF prog-id=242 op=LOAD Dec 16 12:16:46.301000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=5140 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432633331666635356466633633616332663266393362626132653931 Dec 16 12:16:46.301000 audit: BPF prog-id=243 op=LOAD Dec 16 12:16:46.301000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=5140 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432633331666635356466633633616332663266393362626132653931 Dec 16 12:16:46.301000 audit: BPF prog-id=243 op=UNLOAD Dec 16 12:16:46.301000 audit[5153]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5140 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432633331666635356466633633616332663266393362626132653931 Dec 16 12:16:46.301000 audit: BPF prog-id=242 op=UNLOAD Dec 16 12:16:46.301000 audit[5153]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5140 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.301000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432633331666635356466633633616332663266393362626132653931 Dec 16 12:16:46.301000 audit: BPF prog-id=244 op=LOAD Dec 16 12:16:46.301000 audit[5153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=5140 pid=5153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432633331666635356466633633616332663266393362626132653931 Dec 16 12:16:46.303000 audit[5179]: NETFILTER_CFG table=filter:132 family=2 entries=50 op=nft_register_chain pid=5179 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:16:46.303000 audit[5179]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24928 a0=3 a1=ffffed1390f0 a2=0 a3=ffff99a7dfa8 items=0 ppid=4828 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.303000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:16:46.323692 containerd[2086]: time="2025-12-16T12:16:46.323660545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mk7sc,Uid:03762c75-9df3-49a2-9166-fb2b4578d7a1,Namespace:calico-system,Attempt:0,} returns sandbox id \"d2c31ff55dfc63ac2f2f93bba2e91a3f4dbe4b68ef1d8d63705b422765159fe9\"" Dec 16 12:16:46.326569 containerd[2086]: time="2025-12-16T12:16:46.326535675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:16:46.330225 containerd[2086]: time="2025-12-16T12:16:46.330165350Z" level=info msg="connecting to shim 7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1" address="unix:///run/containerd/s/c14c01a33767ff492f022f666e3a75b4f773450823aec3e55d97508778a86297" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:16:46.346590 systemd[1]: Started cri-containerd-7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1.scope - libcontainer container 7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1. 
Dec 16 12:16:46.352000 audit: BPF prog-id=245 op=LOAD Dec 16 12:16:46.353000 audit: BPF prog-id=246 op=LOAD Dec 16 12:16:46.353000 audit[5207]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5195 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730353964653163616632633832333738653562323566616332613637 Dec 16 12:16:46.353000 audit: BPF prog-id=246 op=UNLOAD Dec 16 12:16:46.353000 audit[5207]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5195 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730353964653163616632633832333738653562323566616332613637 Dec 16 12:16:46.353000 audit: BPF prog-id=247 op=LOAD Dec 16 12:16:46.353000 audit[5207]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5195 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730353964653163616632633832333738653562323566616332613637 Dec 16 12:16:46.353000 audit: BPF prog-id=248 op=LOAD Dec 16 12:16:46.353000 audit[5207]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5195 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730353964653163616632633832333738653562323566616332613637 Dec 16 12:16:46.354000 audit: BPF prog-id=248 op=UNLOAD Dec 16 12:16:46.354000 audit[5207]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5195 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730353964653163616632633832333738653562323566616332613637 Dec 16 12:16:46.354000 audit: BPF prog-id=247 op=UNLOAD Dec 16 12:16:46.354000 audit[5207]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5195 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730353964653163616632633832333738653562323566616332613637 Dec 16 12:16:46.354000 audit: BPF prog-id=249 op=LOAD Dec 16 12:16:46.354000 audit[5207]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5195 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730353964653163616632633832333738653562323566616332613637 Dec 16 12:16:46.376593 containerd[2086]: time="2025-12-16T12:16:46.376562627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8kg2p,Uid:4505b85b-9178-4a13-950e-42d575e3f166,Namespace:kube-system,Attempt:0,} returns sandbox id \"7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1\"" Dec 16 12:16:46.378883 containerd[2086]: time="2025-12-16T12:16:46.378861113Z" level=info msg="CreateContainer within sandbox \"7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:16:46.402179 containerd[2086]: time="2025-12-16T12:16:46.402149074Z" level=info msg="Container a717579821df8a35dd2e5172c2d0840d5a6867187dbf33ec6a7438fde1e8c081: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:16:46.414871 containerd[2086]: time="2025-12-16T12:16:46.414840890Z" level=info msg="CreateContainer within sandbox \"7059de1caf2c82378e5b25fac2a677b25de2f30eb5bbf885c364ad6683796cb1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a717579821df8a35dd2e5172c2d0840d5a6867187dbf33ec6a7438fde1e8c081\"" Dec 16 12:16:46.415398 containerd[2086]: time="2025-12-16T12:16:46.415351980Z" level=info msg="StartContainer for \"a717579821df8a35dd2e5172c2d0840d5a6867187dbf33ec6a7438fde1e8c081\"" Dec 16 12:16:46.416079 containerd[2086]: time="2025-12-16T12:16:46.416052380Z" level=info msg="connecting to shim a717579821df8a35dd2e5172c2d0840d5a6867187dbf33ec6a7438fde1e8c081" address="unix:///run/containerd/s/c14c01a33767ff492f022f666e3a75b4f773450823aec3e55d97508778a86297" protocol=ttrpc version=3 Dec 16 12:16:46.429616 systemd[1]: Started cri-containerd-a717579821df8a35dd2e5172c2d0840d5a6867187dbf33ec6a7438fde1e8c081.scope - libcontainer container a717579821df8a35dd2e5172c2d0840d5a6867187dbf33ec6a7438fde1e8c081. 
Dec 16 12:16:46.431550 systemd-networkd[1678]: cali469d61308f2: Gained IPv6LL Dec 16 12:16:46.437000 audit: BPF prog-id=250 op=LOAD Dec 16 12:16:46.438000 audit: BPF prog-id=251 op=LOAD Dec 16 12:16:46.438000 audit[5234]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5195 pid=5234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137313735373938323164663861333564643265353137326332643038 Dec 16 12:16:46.438000 audit: BPF prog-id=251 op=UNLOAD Dec 16 12:16:46.438000 audit[5234]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5195 pid=5234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137313735373938323164663861333564643265353137326332643038 Dec 16 12:16:46.438000 audit: BPF prog-id=252 op=LOAD Dec 16 12:16:46.438000 audit[5234]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5195 pid=5234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137313735373938323164663861333564643265353137326332643038 Dec 16 12:16:46.438000 audit: BPF prog-id=253 op=LOAD Dec 16 12:16:46.438000 audit[5234]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5195 pid=5234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137313735373938323164663861333564643265353137326332643038 Dec 16 12:16:46.438000 audit: BPF prog-id=253 op=UNLOAD Dec 16 12:16:46.438000 audit[5234]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5195 pid=5234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137313735373938323164663861333564643265353137326332643038 Dec 16 
12:16:46.438000 audit: BPF prog-id=252 op=UNLOAD Dec 16 12:16:46.438000 audit[5234]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5195 pid=5234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137313735373938323164663861333564643265353137326332643038 Dec 16 12:16:46.438000 audit: BPF prog-id=254 op=LOAD Dec 16 12:16:46.438000 audit[5234]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5195 pid=5234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:46.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137313735373938323164663861333564643265353137326332643038 Dec 16 12:16:46.458841 containerd[2086]: time="2025-12-16T12:16:46.458815916Z" level=info msg="StartContainer for \"a717579821df8a35dd2e5172c2d0840d5a6867187dbf33ec6a7438fde1e8c081\" returns successfully" Dec 16 12:16:46.622183 containerd[2086]: time="2025-12-16T12:16:46.621939583Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:46.624825 containerd[2086]: time="2025-12-16T12:16:46.624790728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:16:46.625035 containerd[2086]: time="2025-12-16T12:16:46.624876491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:46.625074 kubelet[3679]: E1216 12:16:46.625036 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:16:46.625776 kubelet[3679]: E1216 12:16:46.625083 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:16:46.625776 kubelet[3679]: E1216 12:16:46.625180 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd6kb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mk7sc_calico-system(03762c75-9df3-49a2-9166-fb2b4578d7a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:46.627704 containerd[2086]: time="2025-12-16T12:16:46.627637169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:16:46.914323 containerd[2086]: time="2025-12-16T12:16:46.914083861Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:46.916775 containerd[2086]: time="2025-12-16T12:16:46.916670853Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:16:46.916775 containerd[2086]: time="2025-12-16T12:16:46.916706822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:46.917315 kubelet[3679]: E1216 12:16:46.916966 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:16:46.917315 kubelet[3679]: E1216 12:16:46.917010 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:16:46.917315 kubelet[3679]: E1216 12:16:46.917100 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd6kb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mk7sc_calico-system(03762c75-9df3-49a2-9166-fb2b4578d7a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:46.918363 kubelet[3679]: E1216 12:16:46.918337 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1" Dec 16 12:16:46.983802 containerd[2086]: time="2025-12-16T12:16:46.983768746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6ff77cd6-hb2wg,Uid:4bd414c4-3198-4659-bd4d-34927e106bf1,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:16:47.076895 systemd-networkd[1678]: 
calie028b223ffc: Link UP Dec 16 12:16:47.077682 systemd-networkd[1678]: calie028b223ffc: Gained carrier Dec 16 12:16:47.098575 containerd[2086]: 2025-12-16 12:16:47.017 [INFO][5268] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--hb2wg-eth0 calico-apiserver-7d6ff77cd6- calico-apiserver 4bd414c4-3198-4659-bd4d-34927e106bf1 833 0 2025-12-16 12:16:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d6ff77cd6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-a-8648328498 calico-apiserver-7d6ff77cd6-hb2wg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie028b223ffc [] [] }} ContainerID="fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff77cd6-hb2wg" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--hb2wg-" Dec 16 12:16:47.098575 containerd[2086]: 2025-12-16 12:16:47.018 [INFO][5268] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff77cd6-hb2wg" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--hb2wg-eth0" Dec 16 12:16:47.098575 containerd[2086]: 2025-12-16 12:16:47.041 [INFO][5279] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb" HandleID="k8s-pod-network.fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb" Workload="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--hb2wg-eth0" Dec 16 12:16:47.098575 containerd[2086]: 2025-12-16 12:16:47.041 [INFO][5279] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb" HandleID="k8s-pod-network.fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb" Workload="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--hb2wg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-a-8648328498", "pod":"calico-apiserver-7d6ff77cd6-hb2wg", "timestamp":"2025-12-16 12:16:47.041469063 +0000 UTC"}, Hostname:"ci-4547.0.0-a-8648328498", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:16:47.098575 containerd[2086]: 2025-12-16 12:16:47.041 [INFO][5279] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:16:47.098575 containerd[2086]: 2025-12-16 12:16:47.041 [INFO][5279] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:16:47.098575 containerd[2086]: 2025-12-16 12:16:47.041 [INFO][5279] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-8648328498' Dec 16 12:16:47.098575 containerd[2086]: 2025-12-16 12:16:47.048 [INFO][5279] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:47.098575 containerd[2086]: 2025-12-16 12:16:47.051 [INFO][5279] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-8648328498" Dec 16 12:16:47.098575 containerd[2086]: 2025-12-16 12:16:47.055 [INFO][5279] ipam/ipam.go 511: Trying affinity for 192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:47.098575 containerd[2086]: 2025-12-16 12:16:47.056 [INFO][5279] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:47.098575 containerd[2086]: 2025-12-16 12:16:47.058 [INFO][5279] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:47.098575 containerd[2086]: 2025-12-16 12:16:47.058 [INFO][5279] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.37.64/26 handle="k8s-pod-network.fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:47.098575 containerd[2086]: 2025-12-16 12:16:47.059 [INFO][5279] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb Dec 16 12:16:47.098575 containerd[2086]: 2025-12-16 12:16:47.063 [INFO][5279] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.37.64/26 handle="k8s-pod-network.fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:47.098575 containerd[2086]: 2025-12-16 12:16:47.072 [INFO][5279] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.37.69/26] block=192.168.37.64/26 handle="k8s-pod-network.fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:47.098575 containerd[2086]: 2025-12-16 12:16:47.072 [INFO][5279] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.69/26] handle="k8s-pod-network.fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:47.098575 containerd[2086]: 2025-12-16 12:16:47.072 [INFO][5279] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:16:47.098575 containerd[2086]: 2025-12-16 12:16:47.072 [INFO][5279] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.37.69/26] IPv6=[] ContainerID="fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb" HandleID="k8s-pod-network.fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb" Workload="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--hb2wg-eth0" Dec 16 12:16:47.099144 containerd[2086]: 2025-12-16 12:16:47.074 [INFO][5268] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff77cd6-hb2wg" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--hb2wg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--hb2wg-eth0", GenerateName:"calico-apiserver-7d6ff77cd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bd414c4-3198-4659-bd4d-34927e106bf1", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 16, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6ff77cd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-8648328498", ContainerID:"", Pod:"calico-apiserver-7d6ff77cd6-hb2wg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie028b223ffc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:16:47.099144 containerd[2086]: 2025-12-16 12:16:47.074 [INFO][5268] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.69/32] ContainerID="fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff77cd6-hb2wg" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--hb2wg-eth0" Dec 16 12:16:47.099144 containerd[2086]: 2025-12-16 12:16:47.074 [INFO][5268] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie028b223ffc ContainerID="fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff77cd6-hb2wg" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--hb2wg-eth0" Dec 16 12:16:47.099144 containerd[2086]: 2025-12-16 12:16:47.077 [INFO][5268] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff77cd6-hb2wg" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--hb2wg-eth0" Dec 16 12:16:47.099144 containerd[2086]: 2025-12-16 12:16:47.078 
[INFO][5268] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff77cd6-hb2wg" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--hb2wg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--hb2wg-eth0", GenerateName:"calico-apiserver-7d6ff77cd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bd414c4-3198-4659-bd4d-34927e106bf1", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 16, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d6ff77cd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-8648328498", ContainerID:"fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb", Pod:"calico-apiserver-7d6ff77cd6-hb2wg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie028b223ffc", MAC:"9a:94:8c:24:18:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:16:47.099144 containerd[2086]: 2025-12-16 12:16:47.092 [INFO][5268] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb" Namespace="calico-apiserver" Pod="calico-apiserver-7d6ff77cd6-hb2wg" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--apiserver--7d6ff77cd6--hb2wg-eth0" Dec 16 12:16:47.106000 audit[5294]: NETFILTER_CFG table=filter:133 family=2 entries=49 op=nft_register_chain pid=5294 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:16:47.106000 audit[5294]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25452 a0=3 a1=ffffe3b08b80 a2=0 a3=ffffa3233fa8 items=0 ppid=4828 pid=5294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:47.106000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:16:47.124671 kubelet[3679]: E1216 12:16:47.124581 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1" Dec 16 12:16:47.128584 kubelet[3679]: E1216 12:16:47.128552 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5" podUID="9cbc393d-9eb5-4eab-a130-293205187b74" Dec 16 12:16:47.138034 containerd[2086]: time="2025-12-16T12:16:47.137961229Z" level=info msg="connecting to shim fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb" address="unix:///run/containerd/s/f8a785913b2befee4567c918c0f2eba7b063ee1dd0ae1f6bc14190a7cdf3fa1b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:16:47.161597 systemd[1]: Started cri-containerd-fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb.scope - libcontainer container fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb. Dec 16 12:16:47.170000 audit: BPF prog-id=255 op=LOAD Dec 16 12:16:47.171000 audit: BPF prog-id=256 op=LOAD Dec 16 12:16:47.171000 audit[5315]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5302 pid=5315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:47.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664353866323865643930376164303365346438646233623966316539 Dec 16 12:16:47.172000 audit: BPF prog-id=256 op=UNLOAD Dec 16 12:16:47.172000 audit[5315]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5302 pid=5315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:47.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664353866323865643930376164303365346438646233623966316539 Dec 16 12:16:47.172000 audit: BPF prog-id=257 op=LOAD Dec 16 12:16:47.172000 audit[5315]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5302 pid=5315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:47.172000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664353866323865643930376164303365346438646233623966316539 Dec 16 12:16:47.172000 audit: BPF prog-id=258 op=LOAD Dec 16 12:16:47.172000 audit[5315]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5302 pid=5315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:47.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664353866323865643930376164303365346438646233623966316539 Dec 16 12:16:47.172000 audit: BPF prog-id=258 op=UNLOAD Dec 16 12:16:47.172000 audit[5315]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5302 pid=5315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:47.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664353866323865643930376164303365346438646233623966316539 Dec 16 12:16:47.172000 audit: BPF prog-id=257 op=UNLOAD Dec 16 12:16:47.172000 audit[5315]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5302 pid=5315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:47.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664353866323865643930376164303365346438646233623966316539 Dec 16 12:16:47.172000 audit: BPF prog-id=259 op=LOAD Dec 16 12:16:47.172000 audit[5315]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5302 pid=5315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:47.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664353866323865643930376164303365346438646233623966316539 Dec 16 12:16:47.195941 kubelet[3679]: I1216 12:16:47.195808 3679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8kg2p" podStartSLOduration=36.195699012 podStartE2EDuration="36.195699012s" podCreationTimestamp="2025-12-16 12:16:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:16:47.194657848 +0000 UTC m=+40.287583397" watchObservedRunningTime="2025-12-16 12:16:47.195699012 +0000 UTC m=+40.288624601" Dec 16 
12:16:47.200057 containerd[2086]: time="2025-12-16T12:16:47.200020247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d6ff77cd6-hb2wg,Uid:4bd414c4-3198-4659-bd4d-34927e106bf1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fd58f28ed907ad03e4d8db3b9f1e99801d7d94c5c276215b33fdf3f092b6bfbb\"" Dec 16 12:16:47.202385 containerd[2086]: time="2025-12-16T12:16:47.202208265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:16:47.209000 audit[5342]: NETFILTER_CFG table=filter:134 family=2 entries=17 op=nft_register_rule pid=5342 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:47.209000 audit[5342]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd23f9a30 a2=0 a3=1 items=0 ppid=3827 pid=5342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:47.209000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:47.213000 audit[5342]: NETFILTER_CFG table=nat:135 family=2 entries=35 op=nft_register_chain pid=5342 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:47.213000 audit[5342]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffd23f9a30 a2=0 a3=1 items=0 ppid=3827 pid=5342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:47.213000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:47.468895 containerd[2086]: time="2025-12-16T12:16:47.468564705Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:47.471650 containerd[2086]: time="2025-12-16T12:16:47.471609961Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:16:47.471725 containerd[2086]: time="2025-12-16T12:16:47.471683283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:47.472084 kubelet[3679]: E1216 12:16:47.472044 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:16:47.472187 kubelet[3679]: E1216 12:16:47.472095 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:16:47.472540 kubelet[3679]: E1216 12:16:47.472313 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdnmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d6ff77cd6-hb2wg_calico-apiserver(4bd414c4-3198-4659-bd4d-34927e106bf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:47.473515 kubelet[3679]: E1216 12:16:47.473484 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-hb2wg" podUID="4bd414c4-3198-4659-bd4d-34927e106bf1" Dec 16 12:16:47.711634 systemd-networkd[1678]: cali3b6bb02881b: Gained IPv6LL Dec 16 12:16:47.982769 containerd[2086]: time="2025-12-16T12:16:47.982731072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-db96bddb4-wqn8x,Uid:e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9,Namespace:calico-system,Attempt:0,}" Dec 16 12:16:48.084265 systemd-networkd[1678]: calicf964a0f91a: Link UP Dec 16 12:16:48.084424 systemd-networkd[1678]: calicf964a0f91a: Gained carrier Dec 16 12:16:48.095593 systemd-networkd[1678]: calif67d07088b0: Gained IPv6LL Dec 16 12:16:48.101279 containerd[2086]: 2025-12-16 12:16:48.023 [INFO][5350] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--8648328498-k8s-calico--kube--controllers--db96bddb4--wqn8x-eth0 calico-kube-controllers-db96bddb4- calico-system 
e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9 826 0 2025-12-16 12:16:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:db96bddb4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547.0.0-a-8648328498 calico-kube-controllers-db96bddb4-wqn8x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicf964a0f91a [] [] }} ContainerID="ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949" Namespace="calico-system" Pod="calico-kube-controllers-db96bddb4-wqn8x" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--kube--controllers--db96bddb4--wqn8x-" Dec 16 12:16:48.101279 containerd[2086]: 2025-12-16 12:16:48.023 [INFO][5350] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949" Namespace="calico-system" Pod="calico-kube-controllers-db96bddb4-wqn8x" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--kube--controllers--db96bddb4--wqn8x-eth0" Dec 16 12:16:48.101279 containerd[2086]: 2025-12-16 12:16:48.044 [INFO][5362] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949" HandleID="k8s-pod-network.ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949" Workload="ci--4547.0.0--a--8648328498-k8s-calico--kube--controllers--db96bddb4--wqn8x-eth0" Dec 16 12:16:48.101279 containerd[2086]: 2025-12-16 12:16:48.044 [INFO][5362] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949" HandleID="k8s-pod-network.ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949" Workload="ci--4547.0.0--a--8648328498-k8s-calico--kube--controllers--db96bddb4--wqn8x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b190), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-8648328498", "pod":"calico-kube-controllers-db96bddb4-wqn8x", "timestamp":"2025-12-16 12:16:48.044779817 +0000 UTC"}, Hostname:"ci-4547.0.0-a-8648328498", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:16:48.101279 containerd[2086]: 2025-12-16 12:16:48.045 [INFO][5362] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:16:48.101279 containerd[2086]: 2025-12-16 12:16:48.045 [INFO][5362] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:16:48.101279 containerd[2086]: 2025-12-16 12:16:48.045 [INFO][5362] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-8648328498' Dec 16 12:16:48.101279 containerd[2086]: 2025-12-16 12:16:48.052 [INFO][5362] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:48.101279 containerd[2086]: 2025-12-16 12:16:48.055 [INFO][5362] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-8648328498" Dec 16 12:16:48.101279 containerd[2086]: 2025-12-16 12:16:48.059 [INFO][5362] ipam/ipam.go 511: Trying affinity for 192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:48.101279 containerd[2086]: 2025-12-16 12:16:48.060 [INFO][5362] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:48.101279 containerd[2086]: 2025-12-16 12:16:48.062 [INFO][5362] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:48.101279 containerd[2086]: 2025-12-16 12:16:48.062 [INFO][5362] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.37.64/26 handle="k8s-pod-network.ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:48.101279 containerd[2086]: 2025-12-16 12:16:48.063 [INFO][5362] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949 Dec 16 12:16:48.101279 containerd[2086]: 2025-12-16 12:16:48.068 [INFO][5362] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.37.64/26 handle="k8s-pod-network.ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:48.101279 containerd[2086]: 2025-12-16 12:16:48.079 [INFO][5362] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.37.70/26] block=192.168.37.64/26 handle="k8s-pod-network.ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:48.101279 containerd[2086]: 2025-12-16 12:16:48.079 [INFO][5362] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.70/26] handle="k8s-pod-network.ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:48.101279 containerd[2086]: 2025-12-16 12:16:48.079 [INFO][5362] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:16:48.101279 containerd[2086]: 2025-12-16 12:16:48.079 [INFO][5362] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.37.70/26] IPv6=[] ContainerID="ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949" HandleID="k8s-pod-network.ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949" Workload="ci--4547.0.0--a--8648328498-k8s-calico--kube--controllers--db96bddb4--wqn8x-eth0" Dec 16 12:16:48.102656 containerd[2086]: 2025-12-16 12:16:48.080 [INFO][5350] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949" Namespace="calico-system" Pod="calico-kube-controllers-db96bddb4-wqn8x" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--kube--controllers--db96bddb4--wqn8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--8648328498-k8s-calico--kube--controllers--db96bddb4--wqn8x-eth0", GenerateName:"calico-kube-controllers-db96bddb4-", Namespace:"calico-system", SelfLink:"", UID:"e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 16, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"db96bddb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-8648328498", ContainerID:"", Pod:"calico-kube-controllers-db96bddb4-wqn8x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicf964a0f91a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:16:48.102656 containerd[2086]: 2025-12-16 12:16:48.080 [INFO][5350] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.70/32] ContainerID="ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949" Namespace="calico-system" Pod="calico-kube-controllers-db96bddb4-wqn8x" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--kube--controllers--db96bddb4--wqn8x-eth0" Dec 16 12:16:48.102656 containerd[2086]: 2025-12-16 12:16:48.080 [INFO][5350] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf964a0f91a ContainerID="ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949" Namespace="calico-system" Pod="calico-kube-controllers-db96bddb4-wqn8x" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--kube--controllers--db96bddb4--wqn8x-eth0" Dec 16 12:16:48.102656 containerd[2086]: 2025-12-16 12:16:48.084 [INFO][5350] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949" Namespace="calico-system" Pod="calico-kube-controllers-db96bddb4-wqn8x" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--kube--controllers--db96bddb4--wqn8x-eth0" Dec 
16 12:16:48.102656 containerd[2086]: 2025-12-16 12:16:48.084 [INFO][5350] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949" Namespace="calico-system" Pod="calico-kube-controllers-db96bddb4-wqn8x" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--kube--controllers--db96bddb4--wqn8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--8648328498-k8s-calico--kube--controllers--db96bddb4--wqn8x-eth0", GenerateName:"calico-kube-controllers-db96bddb4-", Namespace:"calico-system", SelfLink:"", UID:"e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 16, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"db96bddb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-8648328498", ContainerID:"ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949", Pod:"calico-kube-controllers-db96bddb4-wqn8x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicf964a0f91a", MAC:"ca:c0:e8:2e:01:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:16:48.102656 containerd[2086]: 2025-12-16 12:16:48.096 [INFO][5350] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949" Namespace="calico-system" Pod="calico-kube-controllers-db96bddb4-wqn8x" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-calico--kube--controllers--db96bddb4--wqn8x-eth0" Dec 16 12:16:48.111000 audit[5376]: NETFILTER_CFG table=filter:136 family=2 entries=52 op=nft_register_chain pid=5376 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:16:48.111000 audit[5376]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24328 a0=3 a1=ffffdd7d48c0 a2=0 a3=ffffb8ae0fa8 items=0 ppid=4828 pid=5376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:48.111000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:16:48.131061 kubelet[3679]: E1216 12:16:48.130186 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-hb2wg" podUID="4bd414c4-3198-4659-bd4d-34927e106bf1" Dec 16 12:16:48.132102 kubelet[3679]: E1216 12:16:48.132062 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1" Dec 16 12:16:48.143572 containerd[2086]: time="2025-12-16T12:16:48.143510212Z" level=info msg="connecting to shim ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949" address="unix:///run/containerd/s/46a56d7ff792cdc3d2fe8200a6f42089d4d7ddcad759daf5b723451af2f54207" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:16:48.173865 systemd[1]: Started cri-containerd-ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949.scope - libcontainer container ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949. Dec 16 12:16:48.237000 audit[5418]: NETFILTER_CFG table=filter:137 family=2 entries=14 op=nft_register_rule pid=5418 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:48.237000 audit[5418]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffee275cf0 a2=0 a3=1 items=0 ppid=3827 pid=5418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:48.237000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:48.238000 audit: BPF prog-id=260 op=LOAD Dec 16 12:16:48.238000 audit: BPF prog-id=261 op=LOAD Dec 16 12:16:48.238000 audit[5398]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5385 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:48.238000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361316265383738303264633761656161333563373762393364616237 Dec 16 12:16:48.238000 audit: BPF prog-id=261 op=UNLOAD Dec 16 12:16:48.238000 audit[5398]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5385 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:48.238000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361316265383738303264633761656161333563373762393364616237 Dec 16 12:16:48.239000 audit: BPF prog-id=262 op=LOAD Dec 16 12:16:48.239000 audit[5398]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5385 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:48.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361316265383738303264633761656161333563373762393364616237 Dec 16 12:16:48.239000 audit: BPF prog-id=263 op=LOAD Dec 16 12:16:48.239000 audit[5398]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5385 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:48.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361316265383738303264633761656161333563373762393364616237 Dec 16 12:16:48.239000 audit: BPF prog-id=263 op=UNLOAD Dec 16 12:16:48.239000 audit[5398]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5385 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:48.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361316265383738303264633761656161333563373762393364616237 Dec 16 12:16:48.239000 audit: BPF prog-id=262 op=UNLOAD Dec 16 12:16:48.239000 audit[5398]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5385 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:48.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361316265383738303264633761656161333563373762393364616237 Dec 16 12:16:48.239000 audit: BPF prog-id=264 op=LOAD Dec 16 12:16:48.239000 audit[5398]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5385 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:48.239000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361316265383738303264633761656161333563373762393364616237 Dec 16 12:16:48.242000 audit[5418]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=5418 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:48.242000 audit[5418]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffee275cf0 a2=0 a3=1 items=0 ppid=3827 pid=5418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:48.242000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:48.261392 containerd[2086]: time="2025-12-16T12:16:48.261356450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-db96bddb4-wqn8x,Uid:e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca1be87802dc7aeaa35c77b93dab7feee6287b28dd32b32c32b29d6d47008949\"" Dec 16 12:16:48.263088 containerd[2086]: time="2025-12-16T12:16:48.263068084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:16:48.479648 systemd-networkd[1678]: calie028b223ffc: Gained IPv6LL Dec 16 12:16:48.608920 containerd[2086]: time="2025-12-16T12:16:48.608848044Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:48.611654 containerd[2086]: time="2025-12-16T12:16:48.611562913Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:16:48.611654 containerd[2086]: time="2025-12-16T12:16:48.611621579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:48.611805 kubelet[3679]: E1216 12:16:48.611769 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:16:48.611878 kubelet[3679]: E1216 12:16:48.611813 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:16:48.612142 kubelet[3679]: E1216 12:16:48.611915 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cvf8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-db96bddb4-wqn8x_calico-system(e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:48.613742 kubelet[3679]: E1216 12:16:48.613718 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-db96bddb4-wqn8x" podUID="e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9" Dec 16 12:16:48.983185 containerd[2086]: time="2025-12-16T12:16:48.982889631Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-kgspd,Uid:ce9228ac-15bc-4471-a735-af33f7e24f95,Namespace:kube-system,Attempt:0,}" Dec 16 12:16:49.086909 systemd-networkd[1678]: cali7ed55f6c709: Link UP Dec 16 12:16:49.088150 systemd-networkd[1678]: cali7ed55f6c709: Gained carrier Dec 16 12:16:49.102188 containerd[2086]: 2025-12-16 12:16:49.014 [INFO][5425] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--kgspd-eth0 coredns-668d6bf9bc- kube-system ce9228ac-15bc-4471-a735-af33f7e24f95 834 0 2025-12-16 12:16:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-a-8648328498 coredns-668d6bf9bc-kgspd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7ed55f6c709 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39" Namespace="kube-system" Pod="coredns-668d6bf9bc-kgspd" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--kgspd-" Dec 16 12:16:49.102188 containerd[2086]: 2025-12-16 12:16:49.014 [INFO][5425] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39" Namespace="kube-system" Pod="coredns-668d6bf9bc-kgspd" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--kgspd-eth0" Dec 16 12:16:49.102188 containerd[2086]: 2025-12-16 12:16:49.037 [INFO][5438] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39" HandleID="k8s-pod-network.086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39" Workload="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--kgspd-eth0" Dec 16 12:16:49.102188 containerd[2086]: 2025-12-16 12:16:49.037 [INFO][5438] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39" HandleID="k8s-pod-network.086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39" Workload="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--kgspd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb5a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-a-8648328498", "pod":"coredns-668d6bf9bc-kgspd", "timestamp":"2025-12-16 12:16:49.03771697 +0000 UTC"}, Hostname:"ci-4547.0.0-a-8648328498", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:16:49.102188 containerd[2086]: 2025-12-16 12:16:49.037 [INFO][5438] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:16:49.102188 containerd[2086]: 2025-12-16 12:16:49.037 [INFO][5438] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:16:49.102188 containerd[2086]: 2025-12-16 12:16:49.037 [INFO][5438] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-8648328498' Dec 16 12:16:49.102188 containerd[2086]: 2025-12-16 12:16:49.043 [INFO][5438] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:49.102188 containerd[2086]: 2025-12-16 12:16:49.046 [INFO][5438] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-8648328498" Dec 16 12:16:49.102188 containerd[2086]: 2025-12-16 12:16:49.051 [INFO][5438] ipam/ipam.go 511: Trying affinity for 192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:49.102188 containerd[2086]: 2025-12-16 12:16:49.053 [INFO][5438] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:49.102188 containerd[2086]: 2025-12-16 12:16:49.056 [INFO][5438] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:49.102188 containerd[2086]: 2025-12-16 12:16:49.056 [INFO][5438] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.37.64/26 handle="k8s-pod-network.086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:49.102188 containerd[2086]: 2025-12-16 12:16:49.057 [INFO][5438] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39 Dec 16 12:16:49.102188 containerd[2086]: 2025-12-16 12:16:49.069 [INFO][5438] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.37.64/26 handle="k8s-pod-network.086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:49.102188 containerd[2086]: 2025-12-16 12:16:49.079 [INFO][5438] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.37.71/26] block=192.168.37.64/26 handle="k8s-pod-network.086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:49.102188 containerd[2086]: 2025-12-16 12:16:49.080 [INFO][5438] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.71/26] handle="k8s-pod-network.086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:49.102188 containerd[2086]: 2025-12-16 12:16:49.080 [INFO][5438] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:16:49.102188 containerd[2086]: 2025-12-16 12:16:49.080 [INFO][5438] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.37.71/26] IPv6=[] ContainerID="086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39" HandleID="k8s-pod-network.086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39" Workload="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--kgspd-eth0" Dec 16 12:16:49.102821 containerd[2086]: 2025-12-16 12:16:49.082 [INFO][5425] cni-plugin/k8s.go 418: Populated endpoint ContainerID="086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39" Namespace="kube-system" Pod="coredns-668d6bf9bc-kgspd" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--kgspd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--kgspd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ce9228ac-15bc-4471-a735-af33f7e24f95", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 16, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-8648328498", ContainerID:"", Pod:"coredns-668d6bf9bc-kgspd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ed55f6c709", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:16:49.102821 containerd[2086]: 2025-12-16 12:16:49.082 [INFO][5425] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.71/32] ContainerID="086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39" Namespace="kube-system" Pod="coredns-668d6bf9bc-kgspd" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--kgspd-eth0" Dec 16 12:16:49.102821 containerd[2086]: 2025-12-16 12:16:49.082 [INFO][5425] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ed55f6c709 ContainerID="086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39" Namespace="kube-system" Pod="coredns-668d6bf9bc-kgspd" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--kgspd-eth0" Dec 16 12:16:49.102821 containerd[2086]: 2025-12-16 12:16:49.088 [INFO][5425] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-kgspd" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--kgspd-eth0" Dec 16 12:16:49.102821 containerd[2086]: 2025-12-16 12:16:49.088 [INFO][5425] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39" Namespace="kube-system" Pod="coredns-668d6bf9bc-kgspd" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--kgspd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--kgspd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ce9228ac-15bc-4471-a735-af33f7e24f95", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 16, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-8648328498", ContainerID:"086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39", Pod:"coredns-668d6bf9bc-kgspd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7ed55f6c709", MAC:"7e:91:f4:fa:25:09", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:16:49.102821 containerd[2086]: 2025-12-16 12:16:49.100 [INFO][5425] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39" Namespace="kube-system" Pod="coredns-668d6bf9bc-kgspd" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-coredns--668d6bf9bc--kgspd-eth0" Dec 16 12:16:49.113000 audit[5455]: NETFILTER_CFG table=filter:139 family=2 entries=52 op=nft_register_chain pid=5455 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:16:49.113000 audit[5455]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23908 a0=3 a1=ffffcb2ce320 a2=0 a3=ffff8f5a5fa8 items=0 ppid=4828 pid=5455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:49.113000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:16:49.119574 systemd-networkd[1678]: calicf964a0f91a: Gained IPv6LL Dec 16 
12:16:49.132363 kubelet[3679]: E1216 12:16:49.132323 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-db96bddb4-wqn8x" podUID="e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9" Dec 16 12:16:49.133320 kubelet[3679]: E1216 12:16:49.133176 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-hb2wg" podUID="4bd414c4-3198-4659-bd4d-34927e106bf1" Dec 16 12:16:49.143398 containerd[2086]: time="2025-12-16T12:16:49.143340000Z" level=info msg="connecting to shim 086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39" address="unix:///run/containerd/s/e7a5478f5e60b1557b50bdb54756d41b40f2c6f7ac25ee4362e2fffdb0cb5844" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:16:49.173610 systemd[1]: Started cri-containerd-086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39.scope - libcontainer container 086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39. Dec 16 12:16:49.183000 audit: BPF prog-id=265 op=LOAD Dec 16 12:16:49.184000 audit: BPF prog-id=266 op=LOAD Dec 16 12:16:49.184000 audit[5476]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5465 pid=5476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:49.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038363231316262383734326466396262323030383138373564623533 Dec 16 12:16:49.184000 audit: BPF prog-id=266 op=UNLOAD Dec 16 12:16:49.184000 audit[5476]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5465 pid=5476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:49.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038363231316262383734326466396262323030383138373564623533 Dec 16 12:16:49.184000 audit: BPF prog-id=267 op=LOAD Dec 16 12:16:49.184000 audit[5476]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5465 pid=5476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:49.184000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038363231316262383734326466396262323030383138373564623533 Dec 16 12:16:49.184000 audit: BPF prog-id=268 op=LOAD Dec 16 12:16:49.184000 audit[5476]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5465 pid=5476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:49.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038363231316262383734326466396262323030383138373564623533 Dec 16 12:16:49.184000 audit: BPF prog-id=268 op=UNLOAD Dec 16 12:16:49.184000 audit[5476]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5465 pid=5476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:49.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038363231316262383734326466396262323030383138373564623533 Dec 16 12:16:49.184000 audit: BPF prog-id=267 op=UNLOAD Dec 16 12:16:49.184000 audit[5476]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5465 pid=5476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:49.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038363231316262383734326466396262323030383138373564623533 Dec 16 12:16:49.184000 audit: BPF prog-id=269 op=LOAD Dec 16 12:16:49.184000 audit[5476]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5465 pid=5476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:49.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038363231316262383734326466396262323030383138373564623533 Dec 16 12:16:49.206209 containerd[2086]: time="2025-12-16T12:16:49.206175084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kgspd,Uid:ce9228ac-15bc-4471-a735-af33f7e24f95,Namespace:kube-system,Attempt:0,} returns sandbox id \"086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39\"" Dec 16 12:16:49.209201 containerd[2086]: time="2025-12-16T12:16:49.209174714Z" level=info msg="CreateContainer within sandbox \"086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:16:49.229101 containerd[2086]: time="2025-12-16T12:16:49.229007621Z" level=info msg="Container 35673bc52573c7ee4852219921990b32c0c12a6925ceae99b354b359b9cd92e9: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:16:49.231333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount81467621.mount: Deactivated successfully. Dec 16 12:16:49.240395 containerd[2086]: time="2025-12-16T12:16:49.240313406Z" level=info msg="CreateContainer within sandbox \"086211bb8742df9bb20081875db533694ca9e815615b64c3918748581fdb4c39\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"35673bc52573c7ee4852219921990b32c0c12a6925ceae99b354b359b9cd92e9\"" Dec 16 12:16:49.242698 containerd[2086]: time="2025-12-16T12:16:49.241830970Z" level=info msg="StartContainer for \"35673bc52573c7ee4852219921990b32c0c12a6925ceae99b354b359b9cd92e9\"" Dec 16 12:16:49.243474 containerd[2086]: time="2025-12-16T12:16:49.243164047Z" level=info msg="connecting to shim 35673bc52573c7ee4852219921990b32c0c12a6925ceae99b354b359b9cd92e9" address="unix:///run/containerd/s/e7a5478f5e60b1557b50bdb54756d41b40f2c6f7ac25ee4362e2fffdb0cb5844" protocol=ttrpc version=3 Dec 16 12:16:49.258573 systemd[1]: Started cri-containerd-35673bc52573c7ee4852219921990b32c0c12a6925ceae99b354b359b9cd92e9.scope - libcontainer container 35673bc52573c7ee4852219921990b32c0c12a6925ceae99b354b359b9cd92e9. Dec 16 12:16:49.265000 audit: BPF prog-id=270 op=LOAD Dec 16 12:16:49.266000 audit: BPF prog-id=271 op=LOAD Dec 16 12:16:49.266000 audit[5503]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=5465 pid=5503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:49.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335363733626335323537336337656534383532323139393231393930 Dec 16 12:16:49.266000 audit: BPF prog-id=271 op=UNLOAD Dec 16 12:16:49.266000 audit[5503]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5465 pid=5503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:49.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335363733626335323537336337656534383532323139393231393930 Dec 16 12:16:49.266000 audit: BPF prog-id=272 op=LOAD Dec 16 12:16:49.266000 audit[5503]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=5465 pid=5503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:49.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335363733626335323537336337656534383532323139393231393930 Dec 16 12:16:49.266000 audit: 
BPF prog-id=273 op=LOAD Dec 16 12:16:49.266000 audit[5503]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=5465 pid=5503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:49.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335363733626335323537336337656534383532323139393231393930 Dec 16 12:16:49.266000 audit: BPF prog-id=273 op=UNLOAD Dec 16 12:16:49.266000 audit[5503]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5465 pid=5503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:49.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335363733626335323537336337656534383532323139393231393930 Dec 16 12:16:49.266000 audit: BPF prog-id=272 op=UNLOAD Dec 16 12:16:49.266000 audit[5503]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5465 pid=5503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:49.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335363733626335323537336337656534383532323139393231393930 Dec 16 12:16:49.266000 audit: BPF prog-id=274 op=LOAD Dec 16 12:16:49.266000 audit[5503]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=5465 pid=5503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:49.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335363733626335323537336337656534383532323139393231393930 Dec 16 12:16:49.285778 containerd[2086]: time="2025-12-16T12:16:49.285755002Z" level=info msg="StartContainer for \"35673bc52573c7ee4852219921990b32c0c12a6925ceae99b354b359b9cd92e9\" returns successfully" Dec 16 12:16:49.983250 containerd[2086]: time="2025-12-16T12:16:49.983209683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gqbjj,Uid:2ce795cb-7cc9-42af-81d6-cb34fd295931,Namespace:calico-system,Attempt:0,}" Dec 16 12:16:50.081006 systemd-networkd[1678]: cali691f89f0a80: Link UP Dec 16 12:16:50.081170 systemd-networkd[1678]: cali691f89f0a80: Gained carrier Dec 16 12:16:50.095156 containerd[2086]: 2025-12-16 12:16:50.021 [INFO][5537] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--8648328498-k8s-goldmane--666569f655--gqbjj-eth0 goldmane-666569f655- 
calico-system 2ce795cb-7cc9-42af-81d6-cb34fd295931 830 0 2025-12-16 12:16:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547.0.0-a-8648328498 goldmane-666569f655-gqbjj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali691f89f0a80 [] [] }} ContainerID="535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b" Namespace="calico-system" Pod="goldmane-666569f655-gqbjj" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-goldmane--666569f655--gqbjj-" Dec 16 12:16:50.095156 containerd[2086]: 2025-12-16 12:16:50.021 [INFO][5537] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b" Namespace="calico-system" Pod="goldmane-666569f655-gqbjj" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-goldmane--666569f655--gqbjj-eth0" Dec 16 12:16:50.095156 containerd[2086]: 2025-12-16 12:16:50.039 [INFO][5550] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b" HandleID="k8s-pod-network.535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b" Workload="ci--4547.0.0--a--8648328498-k8s-goldmane--666569f655--gqbjj-eth0" Dec 16 12:16:50.095156 containerd[2086]: 2025-12-16 12:16:50.039 [INFO][5550] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b" HandleID="k8s-pod-network.535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b" Workload="ci--4547.0.0--a--8648328498-k8s-goldmane--666569f655--gqbjj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-8648328498", "pod":"goldmane-666569f655-gqbjj", "timestamp":"2025-12-16 12:16:50.03943247 +0000 UTC"}, Hostname:"ci-4547.0.0-a-8648328498", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:16:50.095156 containerd[2086]: 2025-12-16 12:16:50.039 [INFO][5550] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:16:50.095156 containerd[2086]: 2025-12-16 12:16:50.039 [INFO][5550] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:16:50.095156 containerd[2086]: 2025-12-16 12:16:50.039 [INFO][5550] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-8648328498' Dec 16 12:16:50.095156 containerd[2086]: 2025-12-16 12:16:50.046 [INFO][5550] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:50.095156 containerd[2086]: 2025-12-16 12:16:50.049 [INFO][5550] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-8648328498" Dec 16 12:16:50.095156 containerd[2086]: 2025-12-16 12:16:50.053 [INFO][5550] ipam/ipam.go 511: Trying affinity for 192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:50.095156 containerd[2086]: 2025-12-16 12:16:50.054 [INFO][5550] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:50.095156 containerd[2086]: 2025-12-16 12:16:50.057 [INFO][5550] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.64/26 host="ci-4547.0.0-a-8648328498" Dec 16 12:16:50.095156 containerd[2086]: 2025-12-16 12:16:50.057 [INFO][5550] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.37.64/26 handle="k8s-pod-network.535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:50.095156 containerd[2086]: 2025-12-16 12:16:50.058 [INFO][5550] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b Dec 16 12:16:50.095156 containerd[2086]: 2025-12-16 12:16:50.066 [INFO][5550] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.37.64/26 handle="k8s-pod-network.535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:50.095156 containerd[2086]: 2025-12-16 12:16:50.076 [INFO][5550] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.37.72/26] block=192.168.37.64/26 handle="k8s-pod-network.535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:50.095156 containerd[2086]: 2025-12-16 12:16:50.076 [INFO][5550] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.72/26] handle="k8s-pod-network.535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b" host="ci-4547.0.0-a-8648328498" Dec 16 12:16:50.095156 containerd[2086]: 2025-12-16 12:16:50.076 [INFO][5550] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:16:50.095156 containerd[2086]: 2025-12-16 12:16:50.076 [INFO][5550] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.37.72/26] IPv6=[] ContainerID="535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b" HandleID="k8s-pod-network.535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b" Workload="ci--4547.0.0--a--8648328498-k8s-goldmane--666569f655--gqbjj-eth0" Dec 16 12:16:50.095800 containerd[2086]: 2025-12-16 12:16:50.077 [INFO][5537] cni-plugin/k8s.go 418: Populated endpoint ContainerID="535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b" Namespace="calico-system" Pod="goldmane-666569f655-gqbjj" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-goldmane--666569f655--gqbjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--8648328498-k8s-goldmane--666569f655--gqbjj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2ce795cb-7cc9-42af-81d6-cb34fd295931", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-8648328498", ContainerID:"", Pod:"goldmane-666569f655-gqbjj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.37.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali691f89f0a80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:16:50.095800 containerd[2086]: 2025-12-16 12:16:50.077 [INFO][5537] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.72/32] ContainerID="535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b" Namespace="calico-system" Pod="goldmane-666569f655-gqbjj" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-goldmane--666569f655--gqbjj-eth0" Dec 16 12:16:50.095800 containerd[2086]: 2025-12-16 12:16:50.077 [INFO][5537] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali691f89f0a80 ContainerID="535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b" Namespace="calico-system" Pod="goldmane-666569f655-gqbjj" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-goldmane--666569f655--gqbjj-eth0" Dec 16 12:16:50.095800 containerd[2086]: 2025-12-16 12:16:50.080 [INFO][5537] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b" Namespace="calico-system" Pod="goldmane-666569f655-gqbjj" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-goldmane--666569f655--gqbjj-eth0" Dec 16 12:16:50.095800 containerd[2086]: 2025-12-16 12:16:50.081 [INFO][5537] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b" 
Namespace="calico-system" Pod="goldmane-666569f655-gqbjj" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-goldmane--666569f655--gqbjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--8648328498-k8s-goldmane--666569f655--gqbjj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2ce795cb-7cc9-42af-81d6-cb34fd295931", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-8648328498", ContainerID:"535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b", Pod:"goldmane-666569f655-gqbjj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.37.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali691f89f0a80", MAC:"6e:af:86:b9:f6:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:16:50.095800 containerd[2086]: 2025-12-16 12:16:50.092 [INFO][5537] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b" Namespace="calico-system" Pod="goldmane-666569f655-gqbjj" WorkloadEndpoint="ci--4547.0.0--a--8648328498-k8s-goldmane--666569f655--gqbjj-eth0" Dec 16 12:16:50.107000 audit[5564]: NETFILTER_CFG table=filter:140 family=2 entries=68 op=nft_register_chain pid=5564 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:16:50.107000 audit[5564]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=32308 a0=3 a1=ffffe6baa460 a2=0 a3=ffffa7207fa8 items=0 ppid=4828 pid=5564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:50.107000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:16:50.128608 containerd[2086]: time="2025-12-16T12:16:50.128575138Z" level=info msg="connecting to shim 535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b" address="unix:///run/containerd/s/4fc1daabf699de8779bda55ce6e2fab4691589a517e4c55f5eb18d2062c5661c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:16:50.141191 kubelet[3679]: E1216 12:16:50.139925 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-db96bddb4-wqn8x" podUID="e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9" Dec 16 12:16:50.160595 systemd[1]: Started cri-containerd-535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b.scope - libcontainer container 535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b. Dec 16 12:16:50.170000 audit: BPF prog-id=275 op=LOAD Dec 16 12:16:50.170000 audit: BPF prog-id=276 op=LOAD Dec 16 12:16:50.170000 audit[5585]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5572 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:50.170000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533356638303230333334653431623037383434353532633565313461 Dec 16 12:16:50.171000 audit: BPF prog-id=276 op=UNLOAD Dec 16 12:16:50.171000 audit[5585]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5572 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:50.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533356638303230333334653431623037383434353532633565313461 Dec 16 12:16:50.171000 audit: BPF prog-id=277 op=LOAD Dec 16 12:16:50.171000 audit[5585]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5572 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:50.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533356638303230333334653431623037383434353532633565313461 Dec 16 12:16:50.171000 audit: BPF prog-id=278 op=LOAD Dec 16 12:16:50.171000 audit[5585]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5572 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:50.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533356638303230333334653431623037383434353532633565313461 Dec 16 12:16:50.172000 audit: BPF prog-id=278 op=UNLOAD Dec 16 12:16:50.172000 audit[5585]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5572 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:16:50.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533356638303230333334653431623037383434353532633565313461 Dec 16 12:16:50.172000 audit: BPF prog-id=277 op=UNLOAD Dec 16 12:16:50.172000 audit[5585]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5572 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:50.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533356638303230333334653431623037383434353532633565313461 Dec 16 12:16:50.174311 kubelet[3679]: I1216 12:16:50.173566 3679 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kgspd" podStartSLOduration=39.173553518 podStartE2EDuration="39.173553518s" podCreationTimestamp="2025-12-16 12:16:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:16:50.172529019 +0000 UTC m=+43.265454568" watchObservedRunningTime="2025-12-16 12:16:50.173553518 +0000 UTC m=+43.266479067" Dec 16 12:16:50.174000 audit: BPF prog-id=279 op=LOAD Dec 16 12:16:50.174000 audit[5585]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5572 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:50.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533356638303230333334653431623037383434353532633565313461 Dec 16 12:16:50.198000 audit[5605]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=5605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:50.198000 audit[5605]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc3a240d0 a2=0 a3=1 items=0 ppid=3827 pid=5605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:50.198000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:50.205000 audit[5605]: NETFILTER_CFG table=nat:142 family=2 entries=44 op=nft_register_rule pid=5605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:50.205000 audit[5605]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffc3a240d0 a2=0 a3=1 items=0 ppid=3827 pid=5605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:50.205000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:50.210777 containerd[2086]: time="2025-12-16T12:16:50.210747417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gqbjj,Uid:2ce795cb-7cc9-42af-81d6-cb34fd295931,Namespace:calico-system,Attempt:0,} returns sandbox id \"535f8020334e41b07844552c5e14a7997a92855ea52b3a3472ab45abc9f79e8b\"" Dec 16 12:16:50.212947 containerd[2086]: time="2025-12-16T12:16:50.212924115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:16:50.220000 audit[5614]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5614 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:50.220000 audit[5614]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd5362240 a2=0 a3=1 items=0 ppid=3827 pid=5614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:50.220000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:50.227000 audit[5614]: NETFILTER_CFG table=nat:144 family=2 entries=56 op=nft_register_chain pid=5614 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:50.227000 audit[5614]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffd5362240 a2=0 a3=1 items=0 ppid=3827 pid=5614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:50.227000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:50.483876 containerd[2086]: time="2025-12-16T12:16:50.483824717Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:50.486741 containerd[2086]: time="2025-12-16T12:16:50.486707023Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:16:50.486831 containerd[2086]: time="2025-12-16T12:16:50.486789970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:50.486986 kubelet[3679]: E1216 12:16:50.486947 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:16:50.487054 kubelet[3679]: E1216 12:16:50.486997 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:16:50.487133 kubelet[3679]: E1216 12:16:50.487098 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glc2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gqbjj_calico-system(2ce795cb-7cc9-42af-81d6-cb34fd295931): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:50.488461 kubelet[3679]: E1216 12:16:50.488417 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gqbjj" podUID="2ce795cb-7cc9-42af-81d6-cb34fd295931" Dec 16 12:16:50.913954 kubelet[3679]: I1216 12:16:50.913774 3679 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 
12:16:51.039618 systemd-networkd[1678]: cali7ed55f6c709: Gained IPv6LL Dec 16 12:16:51.142916 kubelet[3679]: E1216 12:16:51.142881 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gqbjj" podUID="2ce795cb-7cc9-42af-81d6-cb34fd295931" Dec 16 12:16:51.241000 audit[5666]: NETFILTER_CFG table=filter:145 family=2 entries=14 op=nft_register_rule pid=5666 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:51.244857 kernel: kauditd_printk_skb: 214 callbacks suppressed Dec 16 12:16:51.244935 kernel: audit: type=1325 audit(1765887411.241:771): table=filter:145 family=2 entries=14 op=nft_register_rule pid=5666 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:51.241000 audit[5666]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc42a8060 a2=0 a3=1 items=0 ppid=3827 pid=5666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:51.270732 kernel: audit: type=1300 audit(1765887411.241:771): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc42a8060 a2=0 a3=1 items=0 ppid=3827 pid=5666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:51.241000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:51.279782 kernel: audit: type=1327 audit(1765887411.241:771): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:51.271000 audit[5666]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5666 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:51.288713 kernel: audit: type=1325 audit(1765887411.271:772): table=nat:146 family=2 entries=20 op=nft_register_rule pid=5666 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:16:51.271000 audit[5666]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc42a8060 a2=0 a3=1 items=0 ppid=3827 pid=5666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:51.305988 kernel: audit: type=1300 audit(1765887411.271:772): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc42a8060 a2=0 a3=1 items=0 ppid=3827 pid=5666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:16:51.271000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:51.315244 kernel: audit: type=1327 audit(1765887411.271:772): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:16:51.679583 systemd-networkd[1678]: cali691f89f0a80: Gained IPv6LL Dec 16 12:16:52.145311 kubelet[3679]: E1216 12:16:52.144315 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gqbjj" podUID="2ce795cb-7cc9-42af-81d6-cb34fd295931" Dec 16 12:16:56.987047 containerd[2086]: time="2025-12-16T12:16:56.987006182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:16:57.240828 containerd[2086]: time="2025-12-16T12:16:57.240582079Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:57.243575 containerd[2086]: time="2025-12-16T12:16:57.243522115Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:16:57.243679 containerd[2086]: time="2025-12-16T12:16:57.243557580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:57.243970 kubelet[3679]: E1216 12:16:57.243775 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:16:57.243970 kubelet[3679]: E1216 12:16:57.243828 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:16:57.243970 kubelet[3679]: E1216 12:16:57.243929 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8fc68e7e88824b03a0c3d3e5f42811b9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mdwlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8d759d65-5qwgg_calico-system(a5a3ab38-4175-48e7-a44e-56f2f099ecc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:57.246431 containerd[2086]: time="2025-12-16T12:16:57.246406780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:16:57.563317 containerd[2086]: time="2025-12-16T12:16:57.563276383Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:16:57.565889 containerd[2086]: time="2025-12-16T12:16:57.565852310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:16:57.566988 containerd[2086]: time="2025-12-16T12:16:57.565931217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:16:57.567058 kubelet[3679]: E1216 12:16:57.566578 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:16:57.567058 kubelet[3679]: E1216 12:16:57.566625 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:16:57.567058 kubelet[3679]: E1216 12:16:57.566709 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdwlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8d759d65-5qwgg_calico-system(a5a3ab38-4175-48e7-a44e-56f2f099ecc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:16:57.568093 kubelet[3679]: E1216 12:16:57.568051 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8d759d65-5qwgg" podUID="a5a3ab38-4175-48e7-a44e-56f2f099ecc8" Dec 16 12:17:00.984845 containerd[2086]: time="2025-12-16T12:17:00.984675410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:17:01.230038 containerd[2086]: time="2025-12-16T12:17:01.229881158Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:01.232676 containerd[2086]: time="2025-12-16T12:17:01.232607042Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:17:01.232760 containerd[2086]: time="2025-12-16T12:17:01.232660556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:01.232822 kubelet[3679]: E1216 12:17:01.232780 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:17:01.233067 kubelet[3679]: E1216 12:17:01.232824 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:17:01.233067 kubelet[3679]: E1216 12:17:01.232987 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cvf8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-db96bddb4-wqn8x_calico-system(e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:01.233616 containerd[2086]: time="2025-12-16T12:17:01.233576699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:17:01.234626 kubelet[3679]: E1216 12:17:01.234592 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-db96bddb4-wqn8x" podUID="e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9" Dec 16 12:17:01.554155 containerd[2086]: time="2025-12-16T12:17:01.554105853Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:01.556885 containerd[2086]: time="2025-12-16T12:17:01.556836946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:01.556958 containerd[2086]: time="2025-12-16T12:17:01.556878155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:17:01.557173 kubelet[3679]: E1216 12:17:01.557129 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:17:01.557237 kubelet[3679]: E1216 12:17:01.557182 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:17:01.557320 kubelet[3679]: E1216 12:17:01.557285 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tbkwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d6ff77cd6-97mm5_calico-apiserver(9cbc393d-9eb5-4eab-a130-293205187b74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:01.558591 kubelet[3679]: E1216 12:17:01.558566 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5" podUID="9cbc393d-9eb5-4eab-a130-293205187b74" Dec 16 12:17:01.984210 containerd[2086]: time="2025-12-16T12:17:01.983517822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:17:02.269644 containerd[2086]: time="2025-12-16T12:17:02.269601621Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:02.272130 containerd[2086]: time="2025-12-16T12:17:02.272088146Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:17:02.272218 containerd[2086]: time="2025-12-16T12:17:02.272164324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:02.272479 
kubelet[3679]: E1216 12:17:02.272413 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:17:02.272817 kubelet[3679]: E1216 12:17:02.272493 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:17:02.272817 kubelet[3679]: E1216 12:17:02.272629 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdnmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d6ff77cd6-hb2wg_calico-apiserver(4bd414c4-3198-4659-bd4d-34927e106bf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:02.273896 kubelet[3679]: E1216 12:17:02.273855 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-hb2wg" podUID="4bd414c4-3198-4659-bd4d-34927e106bf1" Dec 16 12:17:02.984466 containerd[2086]: time="2025-12-16T12:17:02.983686454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:17:03.226206 containerd[2086]: time="2025-12-16T12:17:03.226157061Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:03.229029 containerd[2086]: time="2025-12-16T12:17:03.228996621Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:17:03.229099 containerd[2086]: time="2025-12-16T12:17:03.229060752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:03.229246 kubelet[3679]: E1216 12:17:03.229202 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:17:03.229307 kubelet[3679]: E1216 12:17:03.229253 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:17:03.229365 kubelet[3679]: E1216 12:17:03.229337 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd6kb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod csi-node-driver-mk7sc_calico-system(03762c75-9df3-49a2-9166-fb2b4578d7a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:03.231561 containerd[2086]: time="2025-12-16T12:17:03.231533972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:17:03.494877 containerd[2086]: time="2025-12-16T12:17:03.494825317Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:03.501329 containerd[2086]: time="2025-12-16T12:17:03.501285928Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:17:03.501432 containerd[2086]: time="2025-12-16T12:17:03.501370707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:03.502613 kubelet[3679]: E1216 12:17:03.502568 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:17:03.502613 kubelet[3679]: E1216 12:17:03.502622 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:17:03.502954 kubelet[3679]: E1216 12:17:03.502724 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd6kb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mk7sc_calico-system(03762c75-9df3-49a2-9166-fb2b4578d7a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:03.504164 kubelet[3679]: E1216 12:17:03.504104 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1" Dec 16 12:17:03.985186 containerd[2086]: time="2025-12-16T12:17:03.984614351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:17:04.306332 containerd[2086]: time="2025-12-16T12:17:04.306286504Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:04.309154 containerd[2086]: time="2025-12-16T12:17:04.309123584Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:17:04.309247 containerd[2086]: time="2025-12-16T12:17:04.309141648Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:04.309356 kubelet[3679]: E1216 12:17:04.309310 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:17:04.309356 kubelet[3679]: E1216 12:17:04.309356 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:17:04.309559 kubelet[3679]: E1216 12:17:04.309470 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glc2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod goldmane-666569f655-gqbjj_calico-system(2ce795cb-7cc9-42af-81d6-cb34fd295931): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:04.311611 kubelet[3679]: E1216 12:17:04.311578 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gqbjj" podUID="2ce795cb-7cc9-42af-81d6-cb34fd295931" Dec 16 12:17:09.984021 kubelet[3679]: E1216 12:17:09.983945 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8d759d65-5qwgg" podUID="a5a3ab38-4175-48e7-a44e-56f2f099ecc8" Dec 16 12:17:13.985790 kubelet[3679]: E1216 12:17:13.985620 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5" podUID="9cbc393d-9eb5-4eab-a130-293205187b74" Dec 16 12:17:13.985790 kubelet[3679]: E1216 12:17:13.985713 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1" Dec 16 12:17:14.985286 kubelet[3679]: E1216 12:17:14.985210 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-hb2wg" podUID="4bd414c4-3198-4659-bd4d-34927e106bf1" Dec 16 12:17:16.988471 kubelet[3679]: E1216 12:17:16.985262 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-db96bddb4-wqn8x" podUID="e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9" Dec 16 12:17:18.989107 kubelet[3679]: E1216 12:17:18.988802 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gqbjj" podUID="2ce795cb-7cc9-42af-81d6-cb34fd295931" Dec 16 12:17:20.986840 containerd[2086]: time="2025-12-16T12:17:20.986798136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:17:21.304751 containerd[2086]: time="2025-12-16T12:17:21.304553259Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:21.307466 containerd[2086]: time="2025-12-16T12:17:21.307377866Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:17:21.307627 containerd[2086]: time="2025-12-16T12:17:21.307563216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:21.307945 kubelet[3679]: E1216 12:17:21.307867 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:17:21.307945 kubelet[3679]: E1216 12:17:21.307932 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:17:21.309129 kubelet[3679]: E1216 12:17:21.308371 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8fc68e7e88824b03a0c3d3e5f42811b9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mdwlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8d759d65-5qwgg_calico-system(a5a3ab38-4175-48e7-a44e-56f2f099ecc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:21.312557 containerd[2086]: time="2025-12-16T12:17:21.312515566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:17:21.594756 containerd[2086]: time="2025-12-16T12:17:21.594633289Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:21.603768 containerd[2086]: time="2025-12-16T12:17:21.603721801Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:17:21.603866 containerd[2086]: time="2025-12-16T12:17:21.603801307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:21.604395 kubelet[3679]: E1216 12:17:21.603998 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:17:21.604395 kubelet[3679]: E1216 12:17:21.604048 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:17:21.604395 kubelet[3679]: E1216 12:17:21.604134 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdwlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8d759d65-5qwgg_calico-system(a5a3ab38-4175-48e7-a44e-56f2f099ecc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:21.605554 kubelet[3679]: E1216 12:17:21.605521 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8d759d65-5qwgg" podUID="a5a3ab38-4175-48e7-a44e-56f2f099ecc8" Dec 16 12:17:25.984463 containerd[2086]: time="2025-12-16T12:17:25.984211006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:17:26.242993 containerd[2086]: time="2025-12-16T12:17:26.242760507Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:26.245611 containerd[2086]: time="2025-12-16T12:17:26.245439781Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 
16 12:17:26.245678 containerd[2086]: time="2025-12-16T12:17:26.245469878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:26.245833 kubelet[3679]: E1216 12:17:26.245798 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:17:26.246101 kubelet[3679]: E1216 12:17:26.245842 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:17:26.246101 kubelet[3679]: E1216 12:17:26.245936 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tbkwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d6ff77cd6-97mm5_calico-apiserver(9cbc393d-9eb5-4eab-a130-293205187b74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:26.247612 kubelet[3679]: E1216 12:17:26.247574 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5" podUID="9cbc393d-9eb5-4eab-a130-293205187b74" Dec 16 12:17:27.984389 containerd[2086]: time="2025-12-16T12:17:27.984254710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:17:28.266629 containerd[2086]: time="2025-12-16T12:17:28.266592513Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:28.270379 containerd[2086]: time="2025-12-16T12:17:28.270345568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:17:28.270379 containerd[2086]: time="2025-12-16T12:17:28.270394633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:28.270647 kubelet[3679]: E1216 12:17:28.270608 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:17:28.271113 kubelet[3679]: E1216 12:17:28.270910 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:17:28.271245 kubelet[3679]: E1216 12:17:28.271208 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cvf8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-db96bddb4-wqn8x_calico-system(e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:28.272668 kubelet[3679]: E1216 12:17:28.272636 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-db96bddb4-wqn8x" podUID="e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9" Dec 16 12:17:28.985296 containerd[2086]: time="2025-12-16T12:17:28.984262129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:17:29.230474 containerd[2086]: time="2025-12-16T12:17:29.229054238Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:29.233757 containerd[2086]: time="2025-12-16T12:17:29.233729251Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:17:29.233947 containerd[2086]: time="2025-12-16T12:17:29.233853408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:29.234048 kubelet[3679]: E1216 12:17:29.234013 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:17:29.234088 kubelet[3679]: E1216 12:17:29.234057 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:17:29.234230 kubelet[3679]: E1216 12:17:29.234201 3679 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd6kb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mk7sc_calico-system(03762c75-9df3-49a2-9166-fb2b4578d7a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:29.234770 containerd[2086]: time="2025-12-16T12:17:29.234748854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:17:29.547768 containerd[2086]: time="2025-12-16T12:17:29.547721319Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:29.551100 containerd[2086]: time="2025-12-16T12:17:29.551070528Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:17:29.551185 containerd[2086]: time="2025-12-16T12:17:29.551143731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:29.551524 kubelet[3679]: E1216 12:17:29.551289 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:17:29.551524 kubelet[3679]: E1216 12:17:29.551340 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:17:29.551831 kubelet[3679]: E1216 12:17:29.551575 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdnmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d6ff77cd6-hb2wg_calico-apiserver(4bd414c4-3198-4659-bd4d-34927e106bf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:29.552842 containerd[2086]: time="2025-12-16T12:17:29.552629796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:17:29.553093 kubelet[3679]: E1216 12:17:29.552919 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-hb2wg" podUID="4bd414c4-3198-4659-bd4d-34927e106bf1" Dec 16 12:17:29.814645 containerd[2086]: time="2025-12-16T12:17:29.814318530Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:29.819673 containerd[2086]: time="2025-12-16T12:17:29.819569411Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:17:29.819673 containerd[2086]: time="2025-12-16T12:17:29.819643717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:29.820368 kubelet[3679]: E1216 12:17:29.820332 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:17:29.821208 kubelet[3679]: E1216 12:17:29.820472 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:17:29.821395 kubelet[3679]: E1216 12:17:29.821359 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd6kb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mk7sc_calico-system(03762c75-9df3-49a2-9166-fb2b4578d7a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:29.822786 kubelet[3679]: E1216 12:17:29.822645 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1" Dec 16 12:17:30.984516 containerd[2086]: time="2025-12-16T12:17:30.984281023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:17:31.241276 containerd[2086]: time="2025-12-16T12:17:31.241067872Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:17:31.243466 containerd[2086]: time="2025-12-16T12:17:31.243426191Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:17:31.243664 containerd[2086]: time="2025-12-16T12:17:31.243493049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:17:31.243744 kubelet[3679]: E1216 12:17:31.243600 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:17:31.243744 kubelet[3679]: E1216 12:17:31.243636 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:17:31.244240 kubelet[3679]: E1216 12:17:31.244124 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glc2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gqbjj_calico-system(2ce795cb-7cc9-42af-81d6-cb34fd295931): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:17:31.245584 kubelet[3679]: E1216 12:17:31.245551 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gqbjj" podUID="2ce795cb-7cc9-42af-81d6-cb34fd295931" Dec 16 12:17:34.986160 kubelet[3679]: E1216 12:17:34.986114 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8d759d65-5qwgg" podUID="a5a3ab38-4175-48e7-a44e-56f2f099ecc8" Dec 16 12:17:37.984919 kubelet[3679]: E1216 12:17:37.984096 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5" podUID="9cbc393d-9eb5-4eab-a130-293205187b74" Dec 16 12:17:40.984551 kubelet[3679]: E1216 12:17:40.984480 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-db96bddb4-wqn8x" podUID="e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9" Dec 16 12:17:42.984981 kubelet[3679]: E1216 12:17:42.984358 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-hb2wg" podUID="4bd414c4-3198-4659-bd4d-34927e106bf1" Dec 16 12:17:42.986158 kubelet[3679]: E1216 12:17:42.986124 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gqbjj" podUID="2ce795cb-7cc9-42af-81d6-cb34fd295931" Dec 16 12:17:42.987497 kubelet[3679]: E1216 12:17:42.987349 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1" Dec 16 12:17:45.984794 kubelet[3679]: E1216 12:17:45.984723 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8d759d65-5qwgg" podUID="a5a3ab38-4175-48e7-a44e-56f2f099ecc8" Dec 16 12:17:52.986616 kubelet[3679]: E1216 12:17:52.986549 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5" podUID="9cbc393d-9eb5-4eab-a130-293205187b74" Dec 16 12:17:53.984817 kubelet[3679]: E1216 12:17:53.984527 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gqbjj" podUID="2ce795cb-7cc9-42af-81d6-cb34fd295931" Dec 16 12:17:54.987148 kubelet[3679]: E1216 12:17:54.987109 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-db96bddb4-wqn8x" podUID="e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9" Dec 16 12:17:55.984207 kubelet[3679]: E1216 12:17:55.983707 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code 
= NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-hb2wg" podUID="4bd414c4-3198-4659-bd4d-34927e106bf1" Dec 16 12:17:56.985207 kubelet[3679]: E1216 12:17:56.985138 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1" Dec 16 12:17:57.983889 kubelet[3679]: E1216 12:17:57.983857 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8d759d65-5qwgg" podUID="a5a3ab38-4175-48e7-a44e-56f2f099ecc8" Dec 16 12:18:03.984307 kubelet[3679]: E1216 12:18:03.984200 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5" podUID="9cbc393d-9eb5-4eab-a130-293205187b74" Dec 16 12:18:06.986511 kubelet[3679]: E1216 12:18:06.986439 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-db96bddb4-wqn8x" podUID="e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9" Dec 16 12:18:06.987220 kubelet[3679]: E1216 12:18:06.986836 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gqbjj" podUID="2ce795cb-7cc9-42af-81d6-cb34fd295931" Dec 16 12:18:07.984493 kubelet[3679]: E1216 12:18:07.984432 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1" Dec 16 12:18:09.984082 containerd[2086]: time="2025-12-16T12:18:09.984039142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:18:10.232482 containerd[2086]: time="2025-12-16T12:18:10.231974457Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:10.234393 containerd[2086]: time="2025-12-16T12:18:10.234308001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:18:10.234735 containerd[2086]: time="2025-12-16T12:18:10.234697878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:10.234832 kubelet[3679]: E1216 12:18:10.234796 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:18:10.235537 kubelet[3679]: E1216 12:18:10.234853 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:18:10.235537 kubelet[3679]: E1216 12:18:10.234966 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdnmv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d6ff77cd6-hb2wg_calico-apiserver(4bd414c4-3198-4659-bd4d-34927e106bf1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:10.237029 kubelet[3679]: E1216 12:18:10.236974 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-hb2wg" podUID="4bd414c4-3198-4659-bd4d-34927e106bf1" Dec 16 12:18:11.984382 containerd[2086]: time="2025-12-16T12:18:11.983930426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:18:12.270768 containerd[2086]: time="2025-12-16T12:18:12.270592488Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:12.273539 containerd[2086]: time="2025-12-16T12:18:12.273443921Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:18:12.273722 containerd[2086]: time="2025-12-16T12:18:12.273481835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:12.273752 kubelet[3679]: E1216 
12:18:12.273697 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:18:12.273752 kubelet[3679]: E1216 12:18:12.273747 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:18:12.274154 kubelet[3679]: E1216 12:18:12.273835 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8fc68e7e88824b03a0c3d3e5f42811b9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mdwlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8d759d65-5qwgg_calico-system(a5a3ab38-4175-48e7-a44e-56f2f099ecc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:12.276052 containerd[2086]: time="2025-12-16T12:18:12.276019689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:18:12.603712 containerd[2086]: time="2025-12-16T12:18:12.603571661Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:12.607467 containerd[2086]: time="2025-12-16T12:18:12.607193793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:18:12.607467 containerd[2086]: time="2025-12-16T12:18:12.607274732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:12.607579 kubelet[3679]: E1216 12:18:12.607418 3679 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:18:12.607579 kubelet[3679]: E1216 12:18:12.607495 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:18:12.607817 kubelet[3679]: E1216 12:18:12.607668 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdwlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-f8d759d65-5qwgg_calico-system(a5a3ab38-4175-48e7-a44e-56f2f099ecc8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:12.608976 kubelet[3679]: E1216 12:18:12.608860 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found\"]" pod="calico-system/whisker-f8d759d65-5qwgg" podUID="a5a3ab38-4175-48e7-a44e-56f2f099ecc8" Dec 16 12:18:17.964238 systemd[1]: Started sshd@7-10.200.20.11:22-10.200.16.10:53766.service - OpenSSH per-connection server daemon (10.200.16.10:53766). Dec 16 12:18:17.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.11:22-10.200.16.10:53766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:17.979529 kernel: audit: type=1130 audit(1765887497.964:773): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.11:22-10.200.16.10:53766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:18.401000 audit[5792]: USER_ACCT pid=5792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:18.408534 sshd[5792]: Accepted publickey for core from 10.200.16.10 port 53766 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:18:18.417000 audit[5792]: CRED_ACQ pid=5792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:18.418710 sshd-session[5792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:18.433020 kernel: audit: type=1101 audit(1765887498.401:774): pid=5792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:18.433096 kernel: audit: type=1103 audit(1765887498.417:775): pid=5792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:18.445405 kernel: audit: type=1006 audit(1765887498.417:776): pid=5792 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 12:18:18.417000 audit[5792]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff531c2c0 a2=3 a3=0 items=0 ppid=1 pid=5792 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:18.417000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:18.470906 kernel: audit: type=1300 audit(1765887498.417:776): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff531c2c0 a2=3 a3=0 items=0 ppid=1 pid=5792 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:18.470962 kernel: audit: type=1327 audit(1765887498.417:776): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:18.474514 systemd-logind[2058]: New session 11 of user core. 
Dec 16 12:18:18.478582 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 12:18:18.481000 audit[5792]: USER_START pid=5792 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:18.501000 audit[5797]: CRED_ACQ pid=5797 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:18.515108 kernel: audit: type=1105 audit(1765887498.481:777): pid=5792 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:18.515159 kernel: audit: type=1103 audit(1765887498.501:778): pid=5797 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:18.736719 sshd[5797]: Connection closed by 10.200.16.10 port 53766 Dec 16 12:18:18.738968 sshd-session[5792]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:18.740000 audit[5792]: USER_END pid=5792 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:18.757347 systemd[1]: sshd@7-10.200.20.11:22-10.200.16.10:53766.service: Deactivated successfully. Dec 16 12:18:18.763061 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:18:18.767003 systemd-logind[2058]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:18:18.770011 systemd-logind[2058]: Removed session 11. 
Dec 16 12:18:18.740000 audit[5792]: CRED_DISP pid=5792 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:18.785585 kernel: audit: type=1106 audit(1765887498.740:779): pid=5792 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:18.785653 kernel: audit: type=1104 audit(1765887498.740:780): pid=5792 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:18.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.11:22-10.200.16.10:53766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:18.986148 containerd[2086]: time="2025-12-16T12:18:18.985926269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:18:19.246032 containerd[2086]: time="2025-12-16T12:18:19.245986642Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:19.249155 containerd[2086]: time="2025-12-16T12:18:19.249127401Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:18:19.249486 containerd[2086]: time="2025-12-16T12:18:19.249193788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:19.249536 kubelet[3679]: E1216 12:18:19.249355 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:18:19.249536 kubelet[3679]: E1216 12:18:19.249395 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:18:19.249831 kubelet[3679]: E1216 12:18:19.249667 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tbkwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d6ff77cd6-97mm5_calico-apiserver(9cbc393d-9eb5-4eab-a130-293205187b74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:19.250737 containerd[2086]: time="2025-12-16T12:18:19.250716030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:18:19.250907 kubelet[3679]: E1216 12:18:19.250883 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5" podUID="9cbc393d-9eb5-4eab-a130-293205187b74" Dec 16 12:18:19.638788 containerd[2086]: time="2025-12-16T12:18:19.638540063Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:19.642933 containerd[2086]: time="2025-12-16T12:18:19.642818356Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:18:19.642933 containerd[2086]: time="2025-12-16T12:18:19.642895454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:19.643181 kubelet[3679]: 
E1216 12:18:19.643134 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:18:19.643233 kubelet[3679]: E1216 12:18:19.643186 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:18:19.643340 kubelet[3679]: E1216 12:18:19.643300 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glc2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gqbjj_calico-system(2ce795cb-7cc9-42af-81d6-cb34fd295931): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:19.644926 kubelet[3679]: E1216 12:18:19.644895 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gqbjj" podUID="2ce795cb-7cc9-42af-81d6-cb34fd295931" Dec 16 12:18:21.985522 kubelet[3679]: E1216 12:18:21.984517 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-hb2wg" podUID="4bd414c4-3198-4659-bd4d-34927e106bf1" Dec 16 12:18:21.986404 containerd[2086]: time="2025-12-16T12:18:21.986372554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:18:22.246880 containerd[2086]: time="2025-12-16T12:18:22.246714153Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:22.249175 containerd[2086]: time="2025-12-16T12:18:22.249144235Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:18:22.249343 containerd[2086]: time="2025-12-16T12:18:22.249206677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:22.249619 kubelet[3679]: E1216 12:18:22.249572 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:18:22.249761 kubelet[3679]: E1216 12:18:22.249700 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:18:22.250183 kubelet[3679]: E1216 12:18:22.250139 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cvf8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-db96bddb4-wqn8x_calico-system(e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:22.251420 kubelet[3679]: E1216 12:18:22.251385 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-db96bddb4-wqn8x" podUID="e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9" Dec 16 12:18:22.986046 containerd[2086]: time="2025-12-16T12:18:22.985956690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:18:23.285085 containerd[2086]: time="2025-12-16T12:18:23.284919257Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:23.287289 containerd[2086]: time="2025-12-16T12:18:23.287179069Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:18:23.287289 containerd[2086]: time="2025-12-16T12:18:23.287182005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:23.287429 kubelet[3679]: E1216 12:18:23.287393 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:18:23.287643 kubelet[3679]: E1216 12:18:23.287472 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:18:23.287643 kubelet[3679]: E1216 12:18:23.287605 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd6kb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mk7sc_calico-system(03762c75-9df3-49a2-9166-fb2b4578d7a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:23.289873 
containerd[2086]: time="2025-12-16T12:18:23.289841510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:18:23.647628 containerd[2086]: time="2025-12-16T12:18:23.647504801Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:18:23.651807 containerd[2086]: time="2025-12-16T12:18:23.651763128Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:18:23.651893 containerd[2086]: time="2025-12-16T12:18:23.651842211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:18:23.652017 kubelet[3679]: E1216 12:18:23.651979 3679 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:18:23.652058 kubelet[3679]: E1216 12:18:23.652027 3679 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:18:23.652143 kubelet[3679]: E1216 12:18:23.652115 3679 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd6kb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mk7sc_calico-system(03762c75-9df3-49a2-9166-fb2b4578d7a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:18:23.653458 kubelet[3679]: E1216 12:18:23.653385 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1" Dec 16 12:18:23.831834 systemd[1]: Started sshd@8-10.200.20.11:22-10.200.16.10:48740.service - OpenSSH per-connection server daemon (10.200.16.10:48740). Dec 16 12:18:23.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.11:22-10.200.16.10:48740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:18:23.834705 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:18:23.834767 kernel: audit: type=1130 audit(1765887503.831:782): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.11:22-10.200.16.10:48740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:24.263000 audit[5833]: USER_ACCT pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:24.268584 sshd[5833]: Accepted publickey for core from 10.200.16.10 port 48740 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:18:24.279707 sshd-session[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:24.278000 audit[5833]: CRED_ACQ pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:24.295414 kernel: audit: type=1101 audit(1765887504.263:783): pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:24.295521 kernel: audit: type=1103 audit(1765887504.278:784): pid=5833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:24.304711 kernel: audit: type=1006 audit(1765887504.278:785): pid=5833 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 16 12:18:24.278000 audit[5833]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc786c470 a2=3 a3=0 items=0 ppid=1 pid=5833 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.320547 kernel: audit: type=1300 audit(1765887504.278:785): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc786c470 a2=3 a3=0 items=0 ppid=1 pid=5833 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:24.278000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:24.327370 kernel: audit: type=1327 audit(1765887504.278:785): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:24.329849 systemd-logind[2058]: New session 12 of user core. Dec 16 12:18:24.339591 systemd[1]: Started session-12.scope - Session 12 of User core. 
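The pull failures above follow the same shape for every Calico image (apiserver, goldmane, kube-controllers, csi, node-driver-registrar): containerd reports a 404 from ghcr.io, kubelet records the ErrImagePull, and the pod worker skips the sync. A minimal sketch of condensing such a capture into a per-pod summary follows; it assumes journalctl-style text with one record per line, and the helper name and regexes are illustrative, not anything taken from this host.

# Hypothetical helper: summarize kubelet "Error syncing pod" records by pod and failing image.
import re
import sys
from collections import defaultdict

POD_RE = re.compile(r'pod="(?P<pod>[^"]+)"')
IMAGE_RE = re.compile(r'ghcr\.io/flatcar/calico/[\w.-]+:v[\w.-]+')

def summarize(journal_text: str) -> dict:
    """Map pod name -> set of image references that failed to pull."""
    failures = defaultdict(set)
    for line in journal_text.splitlines():
        if "Error syncing pod" not in line:
            continue
        pod = POD_RE.search(line)
        if not pod:
            continue
        for image in IMAGE_RE.findall(line):
            failures[pod.group("pod")].add(image)
    return failures

if __name__ == "__main__":
    for pod, images in sorted(summarize(sys.stdin.read()).items()):
        print(f"{pod}: {', '.join(sorted(images))}")

Fed a kubelet journal export, it would reduce the blocks above to one line per pod, e.g. calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5 mapped to ghcr.io/flatcar/calico/apiserver:v3.30.4.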
Dec 16 12:18:24.341000 audit[5833]: USER_START pid=5833 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:24.361000 audit[5837]: CRED_ACQ pid=5837 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:24.374701 kernel: audit: type=1105 audit(1765887504.341:786): pid=5833 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:24.374757 kernel: audit: type=1103 audit(1765887504.361:787): pid=5837 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:24.593545 sshd[5837]: Connection closed by 10.200.16.10 port 48740 Dec 16 12:18:24.594621 sshd-session[5833]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:24.596000 audit[5833]: USER_END pid=5833 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:24.599000 audit[5833]: CRED_DISP pid=5833 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:24.615759 systemd[1]: sshd@8-10.200.20.11:22-10.200.16.10:48740.service: Deactivated successfully. Dec 16 12:18:24.617359 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:18:24.620494 systemd-logind[2058]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:18:24.622005 systemd-logind[2058]: Removed session 12. Dec 16 12:18:24.629557 kernel: audit: type=1106 audit(1765887504.596:788): pid=5833 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:24.630299 kernel: audit: type=1104 audit(1765887504.599:789): pid=5833 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:24.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.11:22-10.200.16.10:48740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:18:27.984775 kubelet[3679]: E1216 12:18:27.984699 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8d759d65-5qwgg" podUID="a5a3ab38-4175-48e7-a44e-56f2f099ecc8" Dec 16 12:18:29.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.11:22-10.200.16.10:48750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:29.679977 systemd[1]: Started sshd@9-10.200.20.11:22-10.200.16.10:48750.service - OpenSSH per-connection server daemon (10.200.16.10:48750). Dec 16 12:18:29.683004 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:18:29.683062 kernel: audit: type=1130 audit(1765887509.679:791): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.11:22-10.200.16.10:48750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:30.113283 sshd[5849]: Accepted publickey for core from 10.200.16.10 port 48750 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:18:30.112000 audit[5849]: USER_ACCT pid=5849 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:30.115187 sshd-session[5849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:30.114000 audit[5849]: CRED_ACQ pid=5849 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:30.145704 kernel: audit: type=1101 audit(1765887510.112:792): pid=5849 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:30.145775 kernel: audit: type=1103 audit(1765887510.114:793): pid=5849 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:30.150583 systemd-logind[2058]: New session 13 of user core. 
Dec 16 12:18:30.155545 kernel: audit: type=1006 audit(1765887510.114:794): pid=5849 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 12:18:30.114000 audit[5849]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffda9bb350 a2=3 a3=0 items=0 ppid=1 pid=5849 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:30.172392 kernel: audit: type=1300 audit(1765887510.114:794): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffda9bb350 a2=3 a3=0 items=0 ppid=1 pid=5849 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:30.114000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:30.174616 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:18:30.179551 kernel: audit: type=1327 audit(1765887510.114:794): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:30.180000 audit[5849]: USER_START pid=5849 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:30.201000 audit[5853]: CRED_ACQ pid=5853 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:30.214961 kernel: audit: type=1105 audit(1765887510.180:795): pid=5849 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:30.215021 kernel: audit: type=1103 audit(1765887510.201:796): pid=5853 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:30.399140 sshd[5853]: Connection closed by 10.200.16.10 port 48750 Dec 16 12:18:30.399895 sshd-session[5849]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:30.400000 audit[5849]: USER_END pid=5849 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:30.403519 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:18:30.403698 systemd-logind[2058]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:18:30.404428 systemd[1]: sshd@9-10.200.20.11:22-10.200.16.10:48750.service: Deactivated successfully. Dec 16 12:18:30.408770 systemd-logind[2058]: Removed session 13. 
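Between the kubelet noise, each SSH login leaves a matched pair of PAM audit records sharing a ses= value: USER_START when the session opens and USER_END when it closes, as with sessions 11 through 13 above. A small sketch of pairing those records into session spans, assuming the capture is read as plain text and that the caller supplies the year missing from syslog-style timestamps:

# Illustrative pairing of USER_START / USER_END audit records by ses= value.
import re
import sys
from datetime import datetime

AUDIT_RE = re.compile(
    r'(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) audit\[\d+\]: '
    r'(?P<type>USER_START|USER_END) .*?\bses=(?P<ses>\d+)'
)

def session_spans(journal_text: str, year: int = 2025):
    """Yield (session, opened_at, closed_at) for each completed session."""
    opened = {}
    for m in AUDIT_RE.finditer(journal_text):
        ts = datetime.strptime(f"{year} {m.group('ts')}", "%Y %b %d %H:%M:%S.%f")
        ses = int(m.group("ses"))
        if m.group("type") == "USER_START":
            opened[ses] = ts
        elif ses in opened:
            yield ses, opened.pop(ses), ts

if __name__ == "__main__":
    for ses, start, end in session_spans(sys.stdin.read()):
        print(f"session {ses}: {start.time()} -> {end.time()} ({(end - start).total_seconds():.3f}s)")

Applied to the records above it would report, for instance, session 12 opening at 12:18:24.341 and closing at 12:18:24.596, a fraction of a second apart.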
Dec 16 12:18:30.400000 audit[5849]: CRED_DISP pid=5849 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:30.434891 kernel: audit: type=1106 audit(1765887510.400:797): pid=5849 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:30.434959 kernel: audit: type=1104 audit(1765887510.400:798): pid=5849 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:30.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.11:22-10.200.16.10:48750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:30.487967 systemd[1]: Started sshd@10-10.200.20.11:22-10.200.16.10:59540.service - OpenSSH per-connection server daemon (10.200.16.10:59540). Dec 16 12:18:30.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.11:22-10.200.16.10:59540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:30.905000 audit[5865]: USER_ACCT pid=5865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:30.906928 sshd[5865]: Accepted publickey for core from 10.200.16.10 port 59540 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:18:30.906000 audit[5865]: CRED_ACQ pid=5865 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:30.906000 audit[5865]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdb7f54b0 a2=3 a3=0 items=0 ppid=1 pid=5865 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:30.906000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:30.907928 sshd-session[5865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:30.911676 systemd-logind[2058]: New session 14 of user core. Dec 16 12:18:30.917579 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 12:18:30.919000 audit[5865]: USER_START pid=5865 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:30.920000 audit[5869]: CRED_ACQ pid=5869 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:31.245474 sshd[5869]: Connection closed by 10.200.16.10 port 59540 Dec 16 12:18:31.245108 sshd-session[5865]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:31.247000 audit[5865]: USER_END pid=5865 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:31.247000 audit[5865]: CRED_DISP pid=5865 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:31.249754 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:18:31.251328 systemd[1]: sshd@10-10.200.20.11:22-10.200.16.10:59540.service: Deactivated successfully. Dec 16 12:18:31.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.11:22-10.200.16.10:59540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:31.257763 systemd-logind[2058]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:18:31.260006 systemd-logind[2058]: Removed session 14. Dec 16 12:18:31.334656 systemd[1]: Started sshd@11-10.200.20.11:22-10.200.16.10:59552.service - OpenSSH per-connection server daemon (10.200.16.10:59552). Dec 16 12:18:31.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.11:22-10.200.16.10:59552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:18:31.765000 audit[5879]: USER_ACCT pid=5879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:31.767676 sshd[5879]: Accepted publickey for core from 10.200.16.10 port 59552 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:18:31.768000 audit[5879]: CRED_ACQ pid=5879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:31.768000 audit[5879]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd88b64c0 a2=3 a3=0 items=0 ppid=1 pid=5879 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:31.768000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:31.770247 sshd-session[5879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:31.777186 systemd-logind[2058]: New session 15 of user core. Dec 16 12:18:31.781589 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:18:31.782000 audit[5879]: USER_START pid=5879 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:31.790000 audit[5883]: CRED_ACQ pid=5883 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:31.984518 kubelet[3679]: E1216 12:18:31.984467 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gqbjj" podUID="2ce795cb-7cc9-42af-81d6-cb34fd295931" Dec 16 12:18:32.044751 sshd[5883]: Connection closed by 10.200.16.10 port 59552 Dec 16 12:18:32.045589 sshd-session[5879]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:32.045000 audit[5879]: USER_END pid=5879 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:32.046000 audit[5879]: CRED_DISP pid=5879 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:32.050268 
systemd-logind[2058]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:18:32.050872 systemd[1]: sshd@11-10.200.20.11:22-10.200.16.10:59552.service: Deactivated successfully. Dec 16 12:18:32.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.11:22-10.200.16.10:59552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:32.052502 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:18:32.053642 systemd-logind[2058]: Removed session 15. Dec 16 12:18:33.984478 kubelet[3679]: E1216 12:18:33.984333 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5" podUID="9cbc393d-9eb5-4eab-a130-293205187b74" Dec 16 12:18:34.985808 kubelet[3679]: E1216 12:18:34.985729 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-hb2wg" podUID="4bd414c4-3198-4659-bd4d-34927e106bf1" Dec 16 12:18:34.986848 kubelet[3679]: E1216 12:18:34.986817 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1" Dec 16 12:18:37.134720 systemd[1]: Started sshd@12-10.200.20.11:22-10.200.16.10:59556.service - OpenSSH per-connection server daemon (10.200.16.10:59556). Dec 16 12:18:37.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.11:22-10.200.16.10:59556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:37.137802 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 12:18:37.137872 kernel: audit: type=1130 audit(1765887517.133:818): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.11:22-10.200.16.10:59556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:18:37.572000 audit[5900]: USER_ACCT pid=5900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:37.574360 sshd[5900]: Accepted publickey for core from 10.200.16.10 port 59556 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:18:37.589000 audit[5900]: CRED_ACQ pid=5900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:37.592316 sshd-session[5900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:37.604913 kernel: audit: type=1101 audit(1765887517.572:819): pid=5900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:37.604969 kernel: audit: type=1103 audit(1765887517.589:820): pid=5900 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:37.614768 kernel: audit: type=1006 audit(1765887517.589:821): pid=5900 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 12:18:37.589000 audit[5900]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffebb2b260 a2=3 a3=0 items=0 ppid=1 pid=5900 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:37.631739 kernel: audit: type=1300 audit(1765887517.589:821): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffebb2b260 a2=3 a3=0 items=0 ppid=1 pid=5900 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:37.589000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:37.638092 kernel: audit: type=1327 audit(1765887517.589:821): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:37.640359 systemd-logind[2058]: New session 16 of user core. Dec 16 12:18:37.650588 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 12:18:37.717000 audit[5900]: USER_START pid=5900 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:37.737000 audit[5904]: CRED_ACQ pid=5904 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:37.753495 kernel: audit: type=1105 audit(1765887517.717:822): pid=5900 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:37.753574 kernel: audit: type=1103 audit(1765887517.737:823): pid=5904 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:37.963354 sshd[5904]: Connection closed by 10.200.16.10 port 59556 Dec 16 12:18:37.964188 sshd-session[5900]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:37.964000 audit[5900]: USER_END pid=5900 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:37.987846 systemd[1]: sshd@12-10.200.20.11:22-10.200.16.10:59556.service: Deactivated successfully. Dec 16 12:18:37.990115 kubelet[3679]: E1216 12:18:37.990077 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-db96bddb4-wqn8x" podUID="e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9" Dec 16 12:18:37.984000 audit[5900]: CRED_DISP pid=5900 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:37.993745 systemd[1]: session-16.scope: Deactivated successfully. 
Dec 16 12:18:38.003986 kernel: audit: type=1106 audit(1765887517.964:824): pid=5900 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:38.004056 kernel: audit: type=1104 audit(1765887517.984:825): pid=5900 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:38.005594 systemd-logind[2058]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:18:37.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.11:22-10.200.16.10:59556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:38.007508 systemd-logind[2058]: Removed session 16. Dec 16 12:18:39.984471 kubelet[3679]: E1216 12:18:39.984280 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8d759d65-5qwgg" podUID="a5a3ab38-4175-48e7-a44e-56f2f099ecc8" Dec 16 12:18:43.054496 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:18:43.054603 kernel: audit: type=1130 audit(1765887523.048:827): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.11:22-10.200.16.10:39570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:43.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.11:22-10.200.16.10:39570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:43.049738 systemd[1]: Started sshd@13-10.200.20.11:22-10.200.16.10:39570.service - OpenSSH per-connection server daemon (10.200.16.10:39570). 
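The "Back-off pulling image" entries that recur above (12:18:21, 12:18:27, 12:18:31, 12:18:33-34, 12:18:37, 12:18:39) reflect kubelet declining to retry each pull until a growing delay has elapsed. A rough sketch of that schedule, assuming the commonly cited kubelet defaults of a 10-second initial delay doubling up to a 300-second cap; the constants are assumptions, not values read from this node:

# Assumed back-off schedule: initial delay doubling each retry, capped.
def backoff_schedule(initial: float = 10.0, factor: float = 2.0,
                     cap: float = 300.0, attempts: int = 8):
    """Yield (attempt, delay_seconds) pairs for successive pull retries."""
    delay = initial
    for attempt in range(1, attempts + 1):
        yield attempt, min(delay, cap)
        delay *= factor

if __name__ == "__main__":
    for attempt, delay in backoff_schedule():
        print(f"retry {attempt}: wait {delay:.0f}s")

Until the missing v3.30.4 tags exist under ghcr.io/flatcar/calico, every retry ends in the same 404, so these pods remain in ImagePullBackOff.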
Dec 16 12:18:43.461000 audit[5917]: USER_ACCT pid=5917 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:43.479312 sshd[5917]: Accepted publickey for core from 10.200.16.10 port 39570 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:18:43.480529 sshd-session[5917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:43.478000 audit[5917]: CRED_ACQ pid=5917 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:43.495975 kernel: audit: type=1101 audit(1765887523.461:828): pid=5917 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:43.496039 kernel: audit: type=1103 audit(1765887523.478:829): pid=5917 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:43.504424 kernel: audit: type=1006 audit(1765887523.478:830): pid=5917 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 12:18:43.478000 audit[5917]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff1f973e0 a2=3 a3=0 items=0 ppid=1 pid=5917 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:43.520739 kernel: audit: type=1300 audit(1765887523.478:830): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff1f973e0 a2=3 a3=0 items=0 ppid=1 pid=5917 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:43.478000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:43.527308 kernel: audit: type=1327 audit(1765887523.478:830): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:43.532387 systemd-logind[2058]: New session 17 of user core. Dec 16 12:18:43.536589 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 12:18:43.537000 audit[5917]: USER_START pid=5917 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:43.558467 kernel: audit: type=1105 audit(1765887523.537:831): pid=5917 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:43.558519 kernel: audit: type=1103 audit(1765887523.556:832): pid=5921 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:43.556000 audit[5921]: CRED_ACQ pid=5921 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:43.748950 sshd[5921]: Connection closed by 10.200.16.10 port 39570 Dec 16 12:18:43.748443 sshd-session[5917]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:43.748000 audit[5917]: USER_END pid=5917 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:43.755673 systemd-logind[2058]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:18:43.757696 systemd[1]: sshd@13-10.200.20.11:22-10.200.16.10:39570.service: Deactivated successfully. Dec 16 12:18:43.759862 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:18:43.762158 systemd-logind[2058]: Removed session 17. Dec 16 12:18:43.748000 audit[5917]: CRED_DISP pid=5917 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:43.782332 kernel: audit: type=1106 audit(1765887523.748:833): pid=5917 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:43.782405 kernel: audit: type=1104 audit(1765887523.748:834): pid=5917 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:43.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.11:22-10.200.16.10:39570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:18:44.985247 kubelet[3679]: E1216 12:18:44.985181 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5" podUID="9cbc393d-9eb5-4eab-a130-293205187b74" Dec 16 12:18:46.985850 kubelet[3679]: E1216 12:18:46.985808 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gqbjj" podUID="2ce795cb-7cc9-42af-81d6-cb34fd295931" Dec 16 12:18:48.848812 systemd[1]: Started sshd@14-10.200.20.11:22-10.200.16.10:39572.service - OpenSSH per-connection server daemon (10.200.16.10:39572). Dec 16 12:18:48.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.11:22-10.200.16.10:39572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:48.853235 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:18:48.853307 kernel: audit: type=1130 audit(1765887528.848:836): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.11:22-10.200.16.10:39572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:18:48.984654 kubelet[3679]: E1216 12:18:48.984600 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-db96bddb4-wqn8x" podUID="e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9" Dec 16 12:18:49.293000 audit[5933]: USER_ACCT pid=5933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:49.301582 sshd[5933]: Accepted publickey for core from 10.200.16.10 port 39572 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:18:49.309999 sshd-session[5933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:49.308000 audit[5933]: CRED_ACQ pid=5933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:49.324994 kernel: audit: type=1101 audit(1765887529.293:837): pid=5933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:49.325060 kernel: audit: type=1103 audit(1765887529.308:838): pid=5933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:49.330531 systemd-logind[2058]: New session 18 of user core. Dec 16 12:18:49.334390 kernel: audit: type=1006 audit(1765887529.308:839): pid=5933 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 16 12:18:49.308000 audit[5933]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd051bfa0 a2=3 a3=0 items=0 ppid=1 pid=5933 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:49.350619 kernel: audit: type=1300 audit(1765887529.308:839): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd051bfa0 a2=3 a3=0 items=0 ppid=1 pid=5933 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:49.308000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:49.351753 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 12:18:49.357014 kernel: audit: type=1327 audit(1765887529.308:839): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:49.357000 audit[5933]: USER_START pid=5933 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:49.380405 kernel: audit: type=1105 audit(1765887529.357:840): pid=5933 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:49.359000 audit[5937]: CRED_ACQ pid=5937 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:49.393231 kernel: audit: type=1103 audit(1765887529.359:841): pid=5937 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:49.595137 sshd[5937]: Connection closed by 10.200.16.10 port 39572 Dec 16 12:18:49.595359 sshd-session[5933]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:49.597000 audit[5933]: USER_END pid=5933 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:49.600615 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:18:49.602848 systemd[1]: sshd@14-10.200.20.11:22-10.200.16.10:39572.service: Deactivated successfully. Dec 16 12:18:49.606859 systemd-logind[2058]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:18:49.611685 systemd-logind[2058]: Removed session 18. 
Dec 16 12:18:49.597000 audit[5933]: CRED_DISP pid=5933 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:49.637514 kernel: audit: type=1106 audit(1765887529.597:842): pid=5933 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:49.637576 kernel: audit: type=1104 audit(1765887529.597:843): pid=5933 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:49.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.11:22-10.200.16.10:39572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:49.984225 kubelet[3679]: E1216 12:18:49.983969 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-hb2wg" podUID="4bd414c4-3198-4659-bd4d-34927e106bf1" Dec 16 12:18:49.985284 kubelet[3679]: E1216 12:18:49.985248 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1" Dec 16 12:18:50.990369 kubelet[3679]: E1216 12:18:50.990329 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8d759d65-5qwgg" podUID="a5a3ab38-4175-48e7-a44e-56f2f099ecc8" Dec 16 12:18:54.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.11:22-10.200.16.10:51158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:54.683672 systemd[1]: Started sshd@15-10.200.20.11:22-10.200.16.10:51158.service - OpenSSH per-connection server daemon (10.200.16.10:51158). Dec 16 12:18:54.686895 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:18:54.686954 kernel: audit: type=1130 audit(1765887534.683:845): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.11:22-10.200.16.10:51158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:55.093000 audit[5972]: USER_ACCT pid=5972 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:55.109293 sshd[5972]: Accepted publickey for core from 10.200.16.10 port 51158 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:18:55.110468 kernel: audit: type=1101 audit(1765887535.093:846): pid=5972 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:55.110000 audit[5972]: CRED_ACQ pid=5972 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:55.112143 sshd-session[5972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:55.132199 systemd-logind[2058]: New session 19 of user core. 
Dec 16 12:18:55.135507 kernel: audit: type=1103 audit(1765887535.110:847): pid=5972 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:55.135581 kernel: audit: type=1006 audit(1765887535.110:848): pid=5972 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 16 12:18:55.110000 audit[5972]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcd59b5e0 a2=3 a3=0 items=0 ppid=1 pid=5972 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:55.153103 kernel: audit: type=1300 audit(1765887535.110:848): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcd59b5e0 a2=3 a3=0 items=0 ppid=1 pid=5972 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:55.110000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:55.155973 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 12:18:55.161211 kernel: audit: type=1327 audit(1765887535.110:848): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:55.159000 audit[5972]: USER_START pid=5972 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:55.180013 kernel: audit: type=1105 audit(1765887535.159:849): pid=5972 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:55.161000 audit[5976]: CRED_ACQ pid=5976 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:55.196680 kernel: audit: type=1103 audit(1765887535.161:850): pid=5976 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:55.385274 sshd[5976]: Connection closed by 10.200.16.10 port 51158 Dec 16 12:18:55.385672 sshd-session[5972]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:55.389000 audit[5972]: USER_END pid=5972 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:55.393934 systemd[1]: sshd@15-10.200.20.11:22-10.200.16.10:51158.service: Deactivated successfully. 
Dec 16 12:18:55.397601 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:18:55.400007 systemd-logind[2058]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:18:55.402629 systemd-logind[2058]: Removed session 19. Dec 16 12:18:55.389000 audit[5972]: CRED_DISP pid=5972 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:55.431036 kernel: audit: type=1106 audit(1765887535.389:851): pid=5972 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:55.431114 kernel: audit: type=1104 audit(1765887535.389:852): pid=5972 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:55.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.11:22-10.200.16.10:51158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:55.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.11:22-10.200.16.10:51172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:55.478128 systemd[1]: Started sshd@16-10.200.20.11:22-10.200.16.10:51172.service - OpenSSH per-connection server daemon (10.200.16.10:51172). Dec 16 12:18:55.903000 audit[5988]: USER_ACCT pid=5988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:55.903766 sshd[5988]: Accepted publickey for core from 10.200.16.10 port 51172 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:18:55.904000 audit[5988]: CRED_ACQ pid=5988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:55.904000 audit[5988]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffec25ad0 a2=3 a3=0 items=0 ppid=1 pid=5988 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:55.904000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:55.906077 sshd-session[5988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:55.912083 systemd-logind[2058]: New session 20 of user core. Dec 16 12:18:55.919493 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 16 12:18:55.922000 audit[5988]: USER_START pid=5988 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:55.926000 audit[5992]: CRED_ACQ pid=5992 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:56.281945 sshd[5992]: Connection closed by 10.200.16.10 port 51172 Dec 16 12:18:56.282487 sshd-session[5988]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:56.283000 audit[5988]: USER_END pid=5988 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:56.284000 audit[5988]: CRED_DISP pid=5988 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:56.286785 systemd[1]: sshd@16-10.200.20.11:22-10.200.16.10:51172.service: Deactivated successfully. Dec 16 12:18:56.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.11:22-10.200.16.10:51172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:56.289092 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:18:56.290379 systemd-logind[2058]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:18:56.291404 systemd-logind[2058]: Removed session 20. Dec 16 12:18:56.372950 systemd[1]: Started sshd@17-10.200.20.11:22-10.200.16.10:51186.service - OpenSSH per-connection server daemon (10.200.16.10:51186). Dec 16 12:18:56.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.11:22-10.200.16.10:51186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:18:56.766000 audit[6001]: USER_ACCT pid=6001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:56.767595 sshd[6001]: Accepted publickey for core from 10.200.16.10 port 51186 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:18:56.767000 audit[6001]: CRED_ACQ pid=6001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:56.767000 audit[6001]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd2478080 a2=3 a3=0 items=0 ppid=1 pid=6001 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:56.767000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:56.768940 sshd-session[6001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:56.772489 systemd-logind[2058]: New session 21 of user core. Dec 16 12:18:56.777600 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 12:18:56.780000 audit[6001]: USER_START pid=6001 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:56.781000 audit[6005]: CRED_ACQ pid=6005 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:56.984677 kubelet[3679]: E1216 12:18:56.983885 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5" podUID="9cbc393d-9eb5-4eab-a130-293205187b74" Dec 16 12:18:57.508000 audit[6016]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=6016 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:57.508000 audit[6016]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffff976f590 a2=0 a3=1 items=0 ppid=3827 pid=6016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:57.508000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:57.514000 audit[6016]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=6016 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 16 12:18:57.514000 audit[6016]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff976f590 a2=0 a3=1 items=0 ppid=3827 pid=6016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:57.514000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:57.528000 audit[6018]: NETFILTER_CFG table=filter:149 family=2 entries=38 op=nft_register_rule pid=6018 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:57.528000 audit[6018]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffddc5a390 a2=0 a3=1 items=0 ppid=3827 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:57.528000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:57.531000 audit[6018]: NETFILTER_CFG table=nat:150 family=2 entries=20 op=nft_register_rule pid=6018 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:18:57.531000 audit[6018]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffddc5a390 a2=0 a3=1 items=0 ppid=3827 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:57.531000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:18:57.554910 sshd[6005]: Connection closed by 10.200.16.10 port 51186 Dec 16 12:18:57.554288 sshd-session[6001]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:57.555000 audit[6001]: USER_END pid=6001 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:57.555000 audit[6001]: CRED_DISP pid=6001 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:57.558122 systemd[1]: sshd@17-10.200.20.11:22-10.200.16.10:51186.service: Deactivated successfully. Dec 16 12:18:57.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.11:22-10.200.16.10:51186 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:57.560992 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:18:57.562521 systemd-logind[2058]: Session 21 logged out. Waiting for processes to exit. Dec 16 12:18:57.565720 systemd-logind[2058]: Removed session 21. Dec 16 12:18:57.641514 systemd[1]: Started sshd@18-10.200.20.11:22-10.200.16.10:51202.service - OpenSSH per-connection server daemon (10.200.16.10:51202). 
Dec 16 12:18:57.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.11:22-10.200.16.10:51202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:58.069000 audit[6023]: USER_ACCT pid=6023 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:58.069735 sshd[6023]: Accepted publickey for core from 10.200.16.10 port 51202 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:18:58.070000 audit[6023]: CRED_ACQ pid=6023 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:58.070000 audit[6023]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe54bbd50 a2=3 a3=0 items=0 ppid=1 pid=6023 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:58.070000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:58.071083 sshd-session[6023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:58.075290 systemd-logind[2058]: New session 22 of user core. Dec 16 12:18:58.081614 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 12:18:58.085000 audit[6023]: USER_START pid=6023 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:58.087000 audit[6027]: CRED_ACQ pid=6027 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:58.448073 sshd[6027]: Connection closed by 10.200.16.10 port 51202 Dec 16 12:18:58.447918 sshd-session[6023]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:58.450000 audit[6023]: USER_END pid=6023 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:58.450000 audit[6023]: CRED_DISP pid=6023 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:58.453622 systemd[1]: sshd@18-10.200.20.11:22-10.200.16.10:51202.service: Deactivated successfully. 
Dec 16 12:18:58.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.11:22-10.200.16.10:51202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:58.456602 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 12:18:58.458502 systemd-logind[2058]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:18:58.461223 systemd-logind[2058]: Removed session 22. Dec 16 12:18:58.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.11:22-10.200.16.10:51206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:58.535658 systemd[1]: Started sshd@19-10.200.20.11:22-10.200.16.10:51206.service - OpenSSH per-connection server daemon (10.200.16.10:51206). Dec 16 12:18:58.958000 audit[6037]: USER_ACCT pid=6037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:58.959481 sshd[6037]: Accepted publickey for core from 10.200.16.10 port 51206 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:18:58.959000 audit[6037]: CRED_ACQ pid=6037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:58.959000 audit[6037]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffecdccbf0 a2=3 a3=0 items=0 ppid=1 pid=6037 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:18:58.959000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:18:58.960687 sshd-session[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:18:58.968404 systemd-logind[2058]: New session 23 of user core. Dec 16 12:18:58.972691 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 12:18:58.975000 audit[6037]: USER_START pid=6037 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:58.977000 audit[6041]: CRED_ACQ pid=6041 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:59.235812 sshd[6041]: Connection closed by 10.200.16.10 port 51206 Dec 16 12:18:59.234197 sshd-session[6037]: pam_unix(sshd:session): session closed for user core Dec 16 12:18:59.235000 audit[6037]: USER_END pid=6037 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:59.235000 audit[6037]: CRED_DISP pid=6037 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:18:59.239130 systemd-logind[2058]: Session 23 logged out. Waiting for processes to exit. Dec 16 12:18:59.239758 systemd[1]: sshd@19-10.200.20.11:22-10.200.16.10:51206.service: Deactivated successfully. Dec 16 12:18:59.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.11:22-10.200.16.10:51206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:18:59.242015 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 12:18:59.243543 systemd-logind[2058]: Removed session 23. 
Dec 16 12:19:00.985008 kubelet[3679]: E1216 12:19:00.984855 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-hb2wg" podUID="4bd414c4-3198-4659-bd4d-34927e106bf1" Dec 16 12:19:01.983491 kubelet[3679]: E1216 12:19:01.983353 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gqbjj" podUID="2ce795cb-7cc9-42af-81d6-cb34fd295931" Dec 16 12:19:01.983996 kubelet[3679]: E1216 12:19:01.983953 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1" Dec 16 12:19:02.546000 audit[6053]: NETFILTER_CFG table=filter:151 family=2 entries=26 op=nft_register_rule pid=6053 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:19:02.551244 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 12:19:02.551300 kernel: audit: type=1325 audit(1765887542.546:894): table=filter:151 family=2 entries=26 op=nft_register_rule pid=6053 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:19:02.546000 audit[6053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff82a92f0 a2=0 a3=1 items=0 ppid=3827 pid=6053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:02.577625 kernel: audit: type=1300 audit(1765887542.546:894): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff82a92f0 a2=0 a3=1 items=0 ppid=3827 pid=6053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:02.546000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:19:02.586551 kernel: audit: type=1327 audit(1765887542.546:894): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:19:02.579000 audit[6053]: NETFILTER_CFG table=nat:152 family=2 entries=104 op=nft_register_chain pid=6053 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:19:02.595352 kernel: audit: type=1325 audit(1765887542.579:895): table=nat:152 family=2 entries=104 op=nft_register_chain pid=6053 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:19:02.579000 audit[6053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=fffff82a92f0 a2=0 a3=1 items=0 ppid=3827 pid=6053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:02.614696 kernel: audit: type=1300 audit(1765887542.579:895): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=fffff82a92f0 a2=0 a3=1 items=0 ppid=3827 pid=6053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:02.579000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:19:02.623438 kernel: audit: type=1327 audit(1765887542.579:895): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:19:02.985346 kubelet[3679]: E1216 12:19:02.985239 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-db96bddb4-wqn8x" podUID="e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9" Dec 16 12:19:03.984553 kubelet[3679]: E1216 12:19:03.984510 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8d759d65-5qwgg" podUID="a5a3ab38-4175-48e7-a44e-56f2f099ecc8" Dec 16 12:19:04.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.11:22-10.200.16.10:59664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:04.326514 systemd[1]: Started sshd@20-10.200.20.11:22-10.200.16.10:59664.service - OpenSSH per-connection server daemon (10.200.16.10:59664). 
Dec 16 12:19:04.341531 kernel: audit: type=1130 audit(1765887544.325:896): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.11:22-10.200.16.10:59664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:04.755000 audit[6055]: USER_ACCT pid=6055 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:04.757638 sshd[6055]: Accepted publickey for core from 10.200.16.10 port 59664 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:19:04.773555 kernel: audit: type=1101 audit(1765887544.755:897): pid=6055 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:04.773638 kernel: audit: type=1103 audit(1765887544.771:898): pid=6055 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:04.771000 audit[6055]: CRED_ACQ pid=6055 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:04.774294 sshd-session[6055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:04.796489 kernel: audit: type=1006 audit(1765887544.771:899): pid=6055 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 12:19:04.771000 audit[6055]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe4bd4cc0 a2=3 a3=0 items=0 ppid=1 pid=6055 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:04.771000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:04.800712 systemd-logind[2058]: New session 24 of user core. Dec 16 12:19:04.804618 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 16 12:19:04.807000 audit[6055]: USER_START pid=6055 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:04.808000 audit[6059]: CRED_ACQ pid=6059 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:05.033654 sshd[6059]: Connection closed by 10.200.16.10 port 59664 Dec 16 12:19:05.034674 sshd-session[6055]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:05.035000 audit[6055]: USER_END pid=6055 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:05.035000 audit[6055]: CRED_DISP pid=6055 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:05.039005 systemd-logind[2058]: Session 24 logged out. Waiting for processes to exit. Dec 16 12:19:05.039695 systemd[1]: sshd@20-10.200.20.11:22-10.200.16.10:59664.service: Deactivated successfully. Dec 16 12:19:05.041221 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 12:19:05.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.11:22-10.200.16.10:59664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:05.045804 systemd-logind[2058]: Removed session 24. Dec 16 12:19:08.984275 kubelet[3679]: E1216 12:19:08.983949 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5" podUID="9cbc393d-9eb5-4eab-a130-293205187b74" Dec 16 12:19:10.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.11:22-10.200.16.10:49314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:10.135675 systemd[1]: Started sshd@21-10.200.20.11:22-10.200.16.10:49314.service - OpenSSH per-connection server daemon (10.200.16.10:49314). Dec 16 12:19:10.138660 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 12:19:10.138721 kernel: audit: type=1130 audit(1765887550.134:905): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.11:22-10.200.16.10:49314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:19:10.582000 audit[6072]: USER_ACCT pid=6072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:10.599780 sshd-session[6072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:10.600427 sshd[6072]: Accepted publickey for core from 10.200.16.10 port 49314 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:19:10.597000 audit[6072]: CRED_ACQ pid=6072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:10.616729 kernel: audit: type=1101 audit(1765887550.582:906): pid=6072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:10.616791 kernel: audit: type=1103 audit(1765887550.597:907): pid=6072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:10.627981 kernel: audit: type=1006 audit(1765887550.597:908): pid=6072 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 12:19:10.629792 systemd-logind[2058]: New session 25 of user core. Dec 16 12:19:10.597000 audit[6072]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc75ed660 a2=3 a3=0 items=0 ppid=1 pid=6072 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:10.645596 kernel: audit: type=1300 audit(1765887550.597:908): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc75ed660 a2=3 a3=0 items=0 ppid=1 pid=6072 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:10.597000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:10.652476 kernel: audit: type=1327 audit(1765887550.597:908): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:10.654589 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 16 12:19:10.656000 audit[6072]: USER_START pid=6072 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:10.657000 audit[6076]: CRED_ACQ pid=6076 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:10.691749 kernel: audit: type=1105 audit(1765887550.656:909): pid=6072 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:10.691820 kernel: audit: type=1103 audit(1765887550.657:910): pid=6076 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:10.874338 sshd[6076]: Connection closed by 10.200.16.10 port 49314 Dec 16 12:19:10.875108 sshd-session[6072]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:10.875000 audit[6072]: USER_END pid=6072 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:10.879095 systemd-logind[2058]: Session 25 logged out. Waiting for processes to exit. Dec 16 12:19:10.881467 systemd[1]: sshd@21-10.200.20.11:22-10.200.16.10:49314.service: Deactivated successfully. Dec 16 12:19:10.884108 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 12:19:10.886072 systemd-logind[2058]: Removed session 25. Dec 16 12:19:10.875000 audit[6072]: CRED_DISP pid=6072 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:10.910536 kernel: audit: type=1106 audit(1765887550.875:911): pid=6072 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:10.910620 kernel: audit: type=1104 audit(1765887550.875:912): pid=6072 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:10.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.11:22-10.200.16.10:49314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:19:14.985162 kubelet[3679]: E1216 12:19:14.985107 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-hb2wg" podUID="4bd414c4-3198-4659-bd4d-34927e106bf1" Dec 16 12:19:14.987049 kubelet[3679]: E1216 12:19:14.985536 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1" Dec 16 12:19:15.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.11:22-10.200.16.10:49316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:15.965320 systemd[1]: Started sshd@22-10.200.20.11:22-10.200.16.10:49316.service - OpenSSH per-connection server daemon (10.200.16.10:49316). Dec 16 12:19:15.969061 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:19:15.969165 kernel: audit: type=1130 audit(1765887555.964:914): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.11:22-10.200.16.10:49316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:19:15.986562 kubelet[3679]: E1216 12:19:15.986532 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-db96bddb4-wqn8x" podUID="e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9" Dec 16 12:19:16.406140 sshd[6091]: Accepted publickey for core from 10.200.16.10 port 49316 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:19:16.404000 audit[6091]: USER_ACCT pid=6091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:16.414367 sshd-session[6091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:16.411000 audit[6091]: CRED_ACQ pid=6091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:16.429751 systemd-logind[2058]: New session 26 of user core. Dec 16 12:19:16.437739 kernel: audit: type=1101 audit(1765887556.404:915): pid=6091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:16.437805 kernel: audit: type=1103 audit(1765887556.411:916): pid=6091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:16.446846 kernel: audit: type=1006 audit(1765887556.411:917): pid=6091 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 16 12:19:16.411000 audit[6091]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc484cf90 a2=3 a3=0 items=0 ppid=1 pid=6091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:16.462548 kernel: audit: type=1300 audit(1765887556.411:917): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc484cf90 a2=3 a3=0 items=0 ppid=1 pid=6091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:16.411000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:16.463914 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 16 12:19:16.469243 kernel: audit: type=1327 audit(1765887556.411:917): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:16.469000 audit[6091]: USER_START pid=6091 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:16.491000 audit[6095]: CRED_ACQ pid=6095 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:16.509081 kernel: audit: type=1105 audit(1765887556.469:918): pid=6091 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:16.509146 kernel: audit: type=1103 audit(1765887556.491:919): pid=6095 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:16.702478 sshd[6095]: Connection closed by 10.200.16.10 port 49316 Dec 16 12:19:16.702220 sshd-session[6091]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:16.702000 audit[6091]: USER_END pid=6091 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:16.707551 systemd-logind[2058]: Session 26 logged out. Waiting for processes to exit. Dec 16 12:19:16.708811 systemd[1]: sshd@22-10.200.20.11:22-10.200.16.10:49316.service: Deactivated successfully. Dec 16 12:19:16.711686 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 12:19:16.714346 systemd-logind[2058]: Removed session 26. 
Dec 16 12:19:16.702000 audit[6091]: CRED_DISP pid=6091 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:16.738333 kernel: audit: type=1106 audit(1765887556.702:920): pid=6091 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:16.738402 kernel: audit: type=1104 audit(1765887556.702:921): pid=6091 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:16.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.11:22-10.200.16.10:49316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:16.986873 kubelet[3679]: E1216 12:19:16.985914 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gqbjj" podUID="2ce795cb-7cc9-42af-81d6-cb34fd295931" Dec 16 12:19:18.984803 kubelet[3679]: E1216 12:19:18.984756 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8d759d65-5qwgg" podUID="a5a3ab38-4175-48e7-a44e-56f2f099ecc8" Dec 16 12:19:21.777848 systemd[1]: Started sshd@23-10.200.20.11:22-10.200.16.10:35528.service - OpenSSH per-connection server daemon (10.200.16.10:35528). Dec 16 12:19:21.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.11:22-10.200.16.10:35528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:21.781983 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:19:21.782049 kernel: audit: type=1130 audit(1765887561.777:923): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.11:22-10.200.16.10:35528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:19:22.190000 audit[6132]: USER_ACCT pid=6132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:22.208106 sshd[6132]: Accepted publickey for core from 10.200.16.10 port 35528 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:19:22.207874 sshd-session[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:22.206000 audit[6132]: CRED_ACQ pid=6132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:22.223917 kernel: audit: type=1101 audit(1765887562.190:924): pid=6132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:22.224116 kernel: audit: type=1103 audit(1765887562.206:925): pid=6132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:22.233465 kernel: audit: type=1006 audit(1765887562.206:926): pid=6132 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Dec 16 12:19:22.206000 audit[6132]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdc0c1c60 a2=3 a3=0 items=0 ppid=1 pid=6132 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:22.250897 kernel: audit: type=1300 audit(1765887562.206:926): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdc0c1c60 a2=3 a3=0 items=0 ppid=1 pid=6132 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:22.206000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:22.256541 kernel: audit: type=1327 audit(1765887562.206:926): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:22.258377 systemd-logind[2058]: New session 27 of user core. Dec 16 12:19:22.263565 systemd[1]: Started session-27.scope - Session 27 of User core. 
Dec 16 12:19:22.265000 audit[6132]: USER_START pid=6132 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:22.267000 audit[6136]: CRED_ACQ pid=6136 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:22.298842 kernel: audit: type=1105 audit(1765887562.265:927): pid=6132 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:22.298908 kernel: audit: type=1103 audit(1765887562.267:928): pid=6136 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:22.453714 sshd[6136]: Connection closed by 10.200.16.10 port 35528 Dec 16 12:19:22.454257 sshd-session[6132]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:22.455000 audit[6132]: USER_END pid=6132 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:22.476919 systemd[1]: sshd@23-10.200.20.11:22-10.200.16.10:35528.service: Deactivated successfully. Dec 16 12:19:22.478427 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 12:19:22.481223 systemd-logind[2058]: Session 27 logged out. Waiting for processes to exit. Dec 16 12:19:22.455000 audit[6132]: CRED_DISP pid=6132 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:22.495468 kernel: audit: type=1106 audit(1765887562.455:929): pid=6132 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:22.496202 kernel: audit: type=1104 audit(1765887562.455:930): pid=6132 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:22.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.11:22-10.200.16.10:35528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:22.497754 systemd-logind[2058]: Removed session 27. 
Dec 16 12:19:23.984675 kubelet[3679]: E1216 12:19:23.984628 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5" podUID="9cbc393d-9eb5-4eab-a130-293205187b74" Dec 16 12:19:25.984684 kubelet[3679]: E1216 12:19:25.984609 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1" Dec 16 12:19:26.984769 kubelet[3679]: E1216 12:19:26.984205 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-hb2wg" podUID="4bd414c4-3198-4659-bd4d-34927e106bf1" Dec 16 12:19:27.546116 systemd[1]: Started sshd@24-10.200.20.11:22-10.200.16.10:35544.service - OpenSSH per-connection server daemon (10.200.16.10:35544). Dec 16 12:19:27.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.11:22-10.200.16.10:35544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:27.551512 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:19:27.551592 kernel: audit: type=1130 audit(1765887567.546:932): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.11:22-10.200.16.10:35544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:19:27.967000 audit[6153]: USER_ACCT pid=6153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:27.968075 sshd[6153]: Accepted publickey for core from 10.200.16.10 port 35544 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:19:27.984795 kubelet[3679]: E1216 12:19:27.984601 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-db96bddb4-wqn8x" podUID="e5dc32c3-fc1f-40c7-8b8d-03d6cdb864e9" Dec 16 12:19:27.989263 sshd-session[6153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:27.988000 audit[6153]: CRED_ACQ pid=6153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:28.008529 kernel: audit: type=1101 audit(1765887567.967:933): pid=6153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:28.008585 kernel: audit: type=1103 audit(1765887567.988:934): pid=6153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:28.013604 systemd-logind[2058]: New session 28 of user core. Dec 16 12:19:28.023119 kernel: audit: type=1006 audit(1765887567.988:935): pid=6153 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Dec 16 12:19:28.023017 systemd[1]: Started session-28.scope - Session 28 of User core. 
Dec 16 12:19:27.988000 audit[6153]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff1aadce0 a2=3 a3=0 items=0 ppid=1 pid=6153 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:28.051304 kernel: audit: type=1300 audit(1765887567.988:935): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff1aadce0 a2=3 a3=0 items=0 ppid=1 pid=6153 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:28.051368 kernel: audit: type=1327 audit(1765887567.988:935): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:27.988000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:28.042000 audit[6153]: USER_START pid=6153 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:28.075565 kernel: audit: type=1105 audit(1765887568.042:936): pid=6153 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:28.043000 audit[6157]: CRED_ACQ pid=6157 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:28.090269 kernel: audit: type=1103 audit(1765887568.043:937): pid=6157 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:28.259283 sshd[6157]: Connection closed by 10.200.16.10 port 35544 Dec 16 12:19:28.259761 sshd-session[6153]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:28.261000 audit[6153]: USER_END pid=6153 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:28.280255 systemd[1]: sshd@24-10.200.20.11:22-10.200.16.10:35544.service: Deactivated successfully. 
Dec 16 12:19:28.262000 audit[6153]: CRED_DISP pid=6153 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:28.295228 kernel: audit: type=1106 audit(1765887568.261:938): pid=6153 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:28.295285 kernel: audit: type=1104 audit(1765887568.262:939): pid=6153 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:28.284107 systemd[1]: session-28.scope: Deactivated successfully. Dec 16 12:19:28.285756 systemd-logind[2058]: Session 28 logged out. Waiting for processes to exit. Dec 16 12:19:28.289804 systemd-logind[2058]: Removed session 28. Dec 16 12:19:28.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.11:22-10.200.16.10:35544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:28.985398 kubelet[3679]: E1216 12:19:28.984703 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gqbjj" podUID="2ce795cb-7cc9-42af-81d6-cb34fd295931" Dec 16 12:19:30.985277 kubelet[3679]: E1216 12:19:30.985069 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-f8d759d65-5qwgg" podUID="a5a3ab38-4175-48e7-a44e-56f2f099ecc8" Dec 16 12:19:33.355027 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:19:33.355131 kernel: audit: type=1130 audit(1765887573.350:941): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.11:22-10.200.16.10:37320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:19:33.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.11:22-10.200.16.10:37320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:33.351689 systemd[1]: Started sshd@25-10.200.20.11:22-10.200.16.10:37320.service - OpenSSH per-connection server daemon (10.200.16.10:37320). Dec 16 12:19:33.786000 audit[6170]: USER_ACCT pid=6170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:33.794791 sshd-session[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:19:33.805007 sshd[6170]: Accepted publickey for core from 10.200.16.10 port 37320 ssh2: RSA SHA256:6TqG8cVOUuPDlQvJsacCPlsjkRbKCUCPWxTlcwhSOqg Dec 16 12:19:33.791000 audit[6170]: CRED_ACQ pid=6170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:33.820958 kernel: audit: type=1101 audit(1765887573.786:942): pid=6170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:33.821021 kernel: audit: type=1103 audit(1765887573.791:943): pid=6170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:33.811247 systemd-logind[2058]: New session 29 of user core. Dec 16 12:19:33.830835 kernel: audit: type=1006 audit(1765887573.791:944): pid=6170 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Dec 16 12:19:33.791000 audit[6170]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffda1ad140 a2=3 a3=0 items=0 ppid=1 pid=6170 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:33.848521 kernel: audit: type=1300 audit(1765887573.791:944): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffda1ad140 a2=3 a3=0 items=0 ppid=1 pid=6170 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:19:33.791000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:33.848650 systemd[1]: Started session-29.scope - Session 29 of User core. 
Dec 16 12:19:33.854954 kernel: audit: type=1327 audit(1765887573.791:944): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:19:33.852000 audit[6170]: USER_START pid=6170 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:33.875045 kernel: audit: type=1105 audit(1765887573.852:945): pid=6170 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:33.874000 audit[6174]: CRED_ACQ pid=6174 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:33.892386 kernel: audit: type=1103 audit(1765887573.874:946): pid=6174 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:34.105471 sshd[6174]: Connection closed by 10.200.16.10 port 37320 Dec 16 12:19:34.105966 sshd-session[6170]: pam_unix(sshd:session): session closed for user core Dec 16 12:19:34.106000 audit[6170]: USER_END pid=6170 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:34.110949 systemd-logind[2058]: Session 29 logged out. Waiting for processes to exit. Dec 16 12:19:34.112494 systemd[1]: sshd@25-10.200.20.11:22-10.200.16.10:37320.service: Deactivated successfully. Dec 16 12:19:34.116299 systemd[1]: session-29.scope: Deactivated successfully. Dec 16 12:19:34.118399 systemd-logind[2058]: Removed session 29. 
Dec 16 12:19:34.106000 audit[6170]: CRED_DISP pid=6170 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:34.141533 kernel: audit: type=1106 audit(1765887574.106:947): pid=6170 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:34.141605 kernel: audit: type=1104 audit(1765887574.106:948): pid=6170 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:19:34.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.11:22-10.200.16.10:37320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:19:34.983713 kubelet[3679]: E1216 12:19:34.983628 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d6ff77cd6-97mm5" podUID="9cbc393d-9eb5-4eab-a130-293205187b74" Dec 16 12:19:36.984928 kubelet[3679]: E1216 12:19:36.984630 3679 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mk7sc" podUID="03762c75-9df3-49a2-9166-fb2b4578d7a1"