Dec 16 12:44:33.028997 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Dec 16 12:44:33.029018 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Dec 12 15:17:36 -00 2025 Dec 16 12:44:33.029025 kernel: KASLR enabled Dec 16 12:44:33.029029 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Dec 16 12:44:33.029034 kernel: printk: legacy bootconsole [pl11] enabled Dec 16 12:44:33.029038 kernel: efi: EFI v2.7 by EDK II Dec 16 12:44:33.029044 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db7d598 Dec 16 12:44:33.029048 kernel: random: crng init done Dec 16 12:44:33.029052 kernel: secureboot: Secure boot disabled Dec 16 12:44:33.029056 kernel: ACPI: Early table checksum verification disabled Dec 16 12:44:33.029060 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL) Dec 16 12:44:33.029064 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:44:33.029068 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:44:33.029074 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628) Dec 16 12:44:33.029093 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:44:33.029098 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:44:33.029102 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:44:33.029109 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:44:33.029113 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:44:33.029117 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:44:33.029122 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Dec 16 12:44:33.029127 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 16 12:44:33.029131 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Dec 16 12:44:33.029135 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 16 12:44:33.029140 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Dec 16 12:44:33.029144 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Dec 16 12:44:33.029149 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Dec 16 12:44:33.029154 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Dec 16 12:44:33.029159 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Dec 16 12:44:33.029163 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Dec 16 12:44:33.029168 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Dec 16 12:44:33.029172 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Dec 16 12:44:33.029177 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Dec 16 12:44:33.029181 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Dec 16 12:44:33.029185 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Dec 16 12:44:33.029190 kernel: ACPI: SRAT: Node 0 PXM 
0 [mem 0x800000000000-0xffffffffffff] hotplug Dec 16 12:44:33.029195 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Dec 16 12:44:33.029199 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff] Dec 16 12:44:33.029205 kernel: Zone ranges: Dec 16 12:44:33.029209 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Dec 16 12:44:33.029216 kernel: DMA32 empty Dec 16 12:44:33.029220 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Dec 16 12:44:33.029225 kernel: Device empty Dec 16 12:44:33.029231 kernel: Movable zone start for each node Dec 16 12:44:33.029235 kernel: Early memory node ranges Dec 16 12:44:33.029240 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Dec 16 12:44:33.029245 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff] Dec 16 12:44:33.029249 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff] Dec 16 12:44:33.029254 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff] Dec 16 12:44:33.029259 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff] Dec 16 12:44:33.029263 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff] Dec 16 12:44:33.029268 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Dec 16 12:44:33.029273 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Dec 16 12:44:33.029278 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Dec 16 12:44:33.029283 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1 Dec 16 12:44:33.029288 kernel: psci: probing for conduit method from ACPI. Dec 16 12:44:33.029292 kernel: psci: PSCIv1.3 detected in firmware. Dec 16 12:44:33.029297 kernel: psci: Using standard PSCI v0.2 function IDs Dec 16 12:44:33.029302 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Dec 16 12:44:33.029306 kernel: psci: SMC Calling Convention v1.4 Dec 16 12:44:33.029311 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Dec 16 12:44:33.029315 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Dec 16 12:44:33.029320 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 16 12:44:33.029325 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 16 12:44:33.029331 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 16 12:44:33.029335 kernel: Detected PIPT I-cache on CPU0 Dec 16 12:44:33.029340 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Dec 16 12:44:33.029345 kernel: CPU features: detected: GIC system register CPU interface Dec 16 12:44:33.029349 kernel: CPU features: detected: Spectre-v4 Dec 16 12:44:33.029354 kernel: CPU features: detected: Spectre-BHB Dec 16 12:44:33.029359 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 16 12:44:33.029363 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 16 12:44:33.029368 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Dec 16 12:44:33.029373 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 16 12:44:33.029378 kernel: alternatives: applying boot alternatives Dec 16 12:44:33.029384 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 16 12:44:33.029389 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 12:44:33.029394 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 12:44:33.029398 kernel: Fallback order for Node 0: 0 Dec 16 12:44:33.029403 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Dec 16 12:44:33.029408 kernel: Policy zone: Normal Dec 16 12:44:33.029412 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 12:44:33.029417 kernel: software IO TLB: area num 2. Dec 16 12:44:33.029422 kernel: software IO TLB: mapped [mem 0x0000000037380000-0x000000003b380000] (64MB) Dec 16 12:44:33.029426 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 16 12:44:33.029432 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 12:44:33.029438 kernel: rcu: RCU event tracing is enabled. Dec 16 12:44:33.029442 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 16 12:44:33.029447 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 12:44:33.029452 kernel: Tracing variant of Tasks RCU enabled. Dec 16 12:44:33.029456 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 12:44:33.029461 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 16 12:44:33.029466 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 12:44:33.029471 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Dec 16 12:44:33.029475 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 16 12:44:33.029480 kernel: GICv3: 960 SPIs implemented Dec 16 12:44:33.029485 kernel: GICv3: 0 Extended SPIs implemented Dec 16 12:44:33.029490 kernel: Root IRQ handler: gic_handle_irq Dec 16 12:44:33.029495 kernel: GICv3: GICv3 features: 16 PPIs, RSS Dec 16 12:44:33.029499 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Dec 16 12:44:33.029504 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Dec 16 12:44:33.029509 kernel: ITS: No ITS available, not enabling LPIs Dec 16 12:44:33.029514 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 12:44:33.029518 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Dec 16 12:44:33.029523 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 12:44:33.029528 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Dec 16 12:44:33.029533 kernel: Console: colour dummy device 80x25 Dec 16 12:44:33.029539 kernel: printk: legacy console [tty1] enabled Dec 16 12:44:33.029544 kernel: ACPI: Core revision 20240827 Dec 16 12:44:33.029549 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Dec 16 12:44:33.029554 kernel: pid_max: default: 32768 minimum: 301 Dec 16 12:44:33.029559 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 12:44:33.029564 kernel: landlock: Up and running. Dec 16 12:44:33.029569 kernel: SELinux: Initializing. Dec 16 12:44:33.029575 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:44:33.029580 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 12:44:33.029585 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1 Dec 16 12:44:33.029590 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0 Dec 16 12:44:33.029598 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 16 12:44:33.029604 kernel: rcu: Hierarchical SRCU implementation. Dec 16 12:44:33.029610 kernel: rcu: Max phase no-delay instances is 400. Dec 16 12:44:33.029615 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 12:44:33.029620 kernel: Remapping and enabling EFI services. Dec 16 12:44:33.029626 kernel: smp: Bringing up secondary CPUs ... Dec 16 12:44:33.029631 kernel: Detected PIPT I-cache on CPU1 Dec 16 12:44:33.029637 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Dec 16 12:44:33.029642 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Dec 16 12:44:33.029648 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 12:44:33.029653 kernel: SMP: Total of 2 processors activated. 
Dec 16 12:44:33.029659 kernel: CPU: All CPU(s) started at EL1 Dec 16 12:44:33.029664 kernel: CPU features: detected: 32-bit EL0 Support Dec 16 12:44:33.029669 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Dec 16 12:44:33.029675 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 16 12:44:33.029680 kernel: CPU features: detected: Common not Private translations Dec 16 12:44:33.029686 kernel: CPU features: detected: CRC32 instructions Dec 16 12:44:33.029691 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Dec 16 12:44:33.029696 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 16 12:44:33.029701 kernel: CPU features: detected: LSE atomic instructions Dec 16 12:44:33.029707 kernel: CPU features: detected: Privileged Access Never Dec 16 12:44:33.029712 kernel: CPU features: detected: Speculation barrier (SB) Dec 16 12:44:33.029717 kernel: CPU features: detected: TLB range maintenance instructions Dec 16 12:44:33.029723 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 16 12:44:33.029728 kernel: CPU features: detected: Scalable Vector Extension Dec 16 12:44:33.029733 kernel: alternatives: applying system-wide alternatives Dec 16 12:44:33.029739 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Dec 16 12:44:33.029744 kernel: SVE: maximum available vector length 16 bytes per vector Dec 16 12:44:33.029749 kernel: SVE: default vector length 16 bytes per vector Dec 16 12:44:33.029755 kernel: Memory: 3979964K/4194160K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12416K init, 1038K bss, 193008K reserved, 16384K cma-reserved) Dec 16 12:44:33.029761 kernel: devtmpfs: initialized Dec 16 12:44:33.029766 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 12:44:33.029771 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 16 12:44:33.029776 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 16 12:44:33.029781 kernel: 0 pages in range for non-PLT usage Dec 16 12:44:33.029787 kernel: 515184 pages in range for PLT usage Dec 16 12:44:33.029792 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 12:44:33.029798 kernel: SMBIOS 3.1.0 present. Dec 16 12:44:33.029803 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Dec 16 12:44:33.029808 kernel: DMI: Memory slots populated: 2/2 Dec 16 12:44:33.029813 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 12:44:33.029819 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 16 12:44:33.029824 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 16 12:44:33.029829 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 16 12:44:33.029834 kernel: audit: initializing netlink subsys (disabled) Dec 16 12:44:33.029840 kernel: audit: type=2000 audit(0.060:1): state=initialized audit_enabled=0 res=1 Dec 16 12:44:33.029845 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 12:44:33.029851 kernel: cpuidle: using governor menu Dec 16 12:44:33.029856 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 16 12:44:33.029861 kernel: ASID allocator initialised with 32768 entries Dec 16 12:44:33.029866 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 12:44:33.029871 kernel: Serial: AMBA PL011 UART driver Dec 16 12:44:33.029877 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 12:44:33.029882 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 12:44:33.029887 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 16 12:44:33.029893 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 16 12:44:33.029898 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 12:44:33.029903 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 12:44:33.029908 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 16 12:44:33.029914 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 16 12:44:33.029919 kernel: ACPI: Added _OSI(Module Device) Dec 16 12:44:33.029925 kernel: ACPI: Added _OSI(Processor Device) Dec 16 12:44:33.029930 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 12:44:33.029935 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 12:44:33.029940 kernel: ACPI: Interpreter enabled Dec 16 12:44:33.029945 kernel: ACPI: Using GIC for interrupt routing Dec 16 12:44:33.029951 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Dec 16 12:44:33.029957 kernel: printk: legacy console [ttyAMA0] enabled Dec 16 12:44:33.029962 kernel: printk: legacy bootconsole [pl11] disabled Dec 16 12:44:33.029967 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Dec 16 12:44:33.029972 kernel: ACPI: CPU0 has been hot-added Dec 16 12:44:33.029977 kernel: ACPI: CPU1 has been hot-added Dec 16 12:44:33.029982 kernel: iommu: Default domain type: Translated Dec 16 12:44:33.029989 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 16 12:44:33.029994 kernel: efivars: Registered efivars operations Dec 16 12:44:33.029999 kernel: vgaarb: loaded Dec 16 12:44:33.030004 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 16 12:44:33.030009 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 12:44:33.030015 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 12:44:33.030020 kernel: pnp: PnP ACPI init Dec 16 12:44:33.030026 kernel: pnp: PnP ACPI: found 0 devices Dec 16 12:44:33.030031 kernel: NET: Registered PF_INET protocol family Dec 16 12:44:33.030036 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 12:44:33.030041 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 16 12:44:33.030046 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 12:44:33.030052 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 12:44:33.030057 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 16 12:44:33.030063 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 16 12:44:33.030068 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:44:33.030074 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:44:33.030086 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 12:44:33.030091 kernel: PCI: CLS 0 bytes, default 64 Dec 16 12:44:33.030097 kernel: kvm [1]: HYP mode not available Dec 
16 12:44:33.030102 kernel: Initialise system trusted keyrings Dec 16 12:44:33.030107 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 16 12:44:33.030113 kernel: Key type asymmetric registered Dec 16 12:44:33.030118 kernel: Asymmetric key parser 'x509' registered Dec 16 12:44:33.030123 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 16 12:44:33.030128 kernel: io scheduler mq-deadline registered Dec 16 12:44:33.030133 kernel: io scheduler kyber registered Dec 16 12:44:33.030138 kernel: io scheduler bfq registered Dec 16 12:44:33.030144 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 12:44:33.030150 kernel: thunder_xcv, ver 1.0 Dec 16 12:44:33.030155 kernel: thunder_bgx, ver 1.0 Dec 16 12:44:33.030160 kernel: nicpf, ver 1.0 Dec 16 12:44:33.030165 kernel: nicvf, ver 1.0 Dec 16 12:44:33.030317 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 16 12:44:33.030385 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:44:29 UTC (1765889069) Dec 16 12:44:33.030394 kernel: efifb: probing for efifb Dec 16 12:44:33.030399 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 16 12:44:33.030404 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 16 12:44:33.030410 kernel: efifb: scrolling: redraw Dec 16 12:44:33.030415 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 16 12:44:33.030420 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 12:44:33.030425 kernel: fb0: EFI VGA frame buffer device Dec 16 12:44:33.030431 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Dec 16 12:44:33.030437 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 12:44:33.030442 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 16 12:44:33.030447 kernel: watchdog: NMI not fully supported Dec 16 12:44:33.030452 kernel: watchdog: Hard watchdog permanently disabled Dec 16 12:44:33.030458 kernel: NET: Registered PF_INET6 protocol family Dec 16 12:44:33.030463 kernel: Segment Routing with IPv6 Dec 16 12:44:33.030469 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 12:44:33.030474 kernel: NET: Registered PF_PACKET protocol family Dec 16 12:44:33.030479 kernel: Key type dns_resolver registered Dec 16 12:44:33.030484 kernel: registered taskstats version 1 Dec 16 12:44:33.030489 kernel: Loading compiled-in X.509 certificates Dec 16 12:44:33.030495 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: a5d527f63342895c4af575176d4ae6e640b6d0e9' Dec 16 12:44:33.030500 kernel: Demotion targets for Node 0: null Dec 16 12:44:33.030506 kernel: Key type .fscrypt registered Dec 16 12:44:33.030511 kernel: Key type fscrypt-provisioning registered Dec 16 12:44:33.030516 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 16 12:44:33.030521 kernel: ima: Allocated hash algorithm: sha1 Dec 16 12:44:33.030526 kernel: ima: No architecture policies found Dec 16 12:44:33.030531 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 16 12:44:33.030537 kernel: clk: Disabling unused clocks Dec 16 12:44:33.030542 kernel: PM: genpd: Disabling unused power domains Dec 16 12:44:33.030548 kernel: Freeing unused kernel memory: 12416K Dec 16 12:44:33.030553 kernel: Run /init as init process Dec 16 12:44:33.030558 kernel: with arguments: Dec 16 12:44:33.030563 kernel: /init Dec 16 12:44:33.030568 kernel: with environment: Dec 16 12:44:33.030573 kernel: HOME=/ Dec 16 12:44:33.030578 kernel: TERM=linux Dec 16 12:44:33.030584 kernel: hv_vmbus: Vmbus version:5.3 Dec 16 12:44:33.030589 kernel: hv_vmbus: registering driver hid_hyperv Dec 16 12:44:33.030595 kernel: SCSI subsystem initialized Dec 16 12:44:33.030600 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 16 12:44:33.030688 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 16 12:44:33.030695 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 16 12:44:33.030702 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 16 12:44:33.030707 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 16 12:44:33.030713 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 16 12:44:33.030718 kernel: PTP clock support registered Dec 16 12:44:33.030723 kernel: hv_utils: Registering HyperV Utility Driver Dec 16 12:44:33.030729 kernel: hv_vmbus: registering driver hv_utils Dec 16 12:44:33.030734 kernel: hv_utils: Heartbeat IC version 3.0 Dec 16 12:44:33.030740 kernel: hv_utils: Shutdown IC version 3.2 Dec 16 12:44:33.030745 kernel: hv_utils: TimeSync IC version 4.0 Dec 16 12:44:33.030750 kernel: hv_vmbus: registering driver hv_storvsc Dec 16 12:44:33.030846 kernel: scsi host1: storvsc_host_t Dec 16 12:44:33.030926 kernel: scsi host0: storvsc_host_t Dec 16 12:44:33.031014 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 16 12:44:33.031140 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Dec 16 12:44:33.031222 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 16 12:44:33.031295 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 16 12:44:33.031368 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 16 12:44:33.031441 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 16 12:44:33.031513 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 16 12:44:33.031597 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#61 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Dec 16 12:44:33.031664 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#4 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Dec 16 12:44:33.031671 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 12:44:33.031742 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 16 12:44:33.031816 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 16 12:44:33.031824 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 12:44:33.031898 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 16 12:44:33.031904 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 16 12:44:33.031910 kernel: device-mapper: uevent: version 1.0.3 Dec 16 12:44:33.031915 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 12:44:33.031920 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 16 12:44:33.031926 kernel: raid6: neonx8 gen() 18511 MB/s Dec 16 12:44:33.031932 kernel: raid6: neonx4 gen() 18546 MB/s Dec 16 12:44:33.031938 kernel: raid6: neonx2 gen() 17094 MB/s Dec 16 12:44:33.031943 kernel: raid6: neonx1 gen() 15082 MB/s Dec 16 12:44:33.031948 kernel: raid6: int64x8 gen() 10564 MB/s Dec 16 12:44:33.031953 kernel: raid6: int64x4 gen() 10612 MB/s Dec 16 12:44:33.031958 kernel: raid6: int64x2 gen() 8998 MB/s Dec 16 12:44:33.031963 kernel: raid6: int64x1 gen() 7026 MB/s Dec 16 12:44:33.031970 kernel: raid6: using algorithm neonx4 gen() 18546 MB/s Dec 16 12:44:33.031975 kernel: raid6: .... xor() 15141 MB/s, rmw enabled Dec 16 12:44:33.031980 kernel: raid6: using neon recovery algorithm Dec 16 12:44:33.031985 kernel: xor: measuring software checksum speed Dec 16 12:44:33.031990 kernel: 8regs : 28653 MB/sec Dec 16 12:44:33.031996 kernel: 32regs : 28733 MB/sec Dec 16 12:44:33.032001 kernel: arm64_neon : 37378 MB/sec Dec 16 12:44:33.032006 kernel: xor: using function: arm64_neon (37378 MB/sec) Dec 16 12:44:33.032013 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 12:44:33.032018 kernel: BTRFS: device fsid d09b8b5a-fb5f-4a17-94ef-0a452535b2bc devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (433) Dec 16 12:44:33.032024 kernel: BTRFS info (device dm-0): first mount of filesystem d09b8b5a-fb5f-4a17-94ef-0a452535b2bc Dec 16 12:44:33.032029 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:44:33.032034 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 12:44:33.032040 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 12:44:33.032045 kernel: loop: module loaded Dec 16 12:44:33.032051 kernel: loop0: detected capacity change from 0 to 91480 Dec 16 12:44:33.032056 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 12:44:33.032063 systemd[1]: Successfully made /usr/ read-only. Dec 16 12:44:33.032071 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:44:33.032077 systemd[1]: Detected virtualization microsoft. Dec 16 12:44:33.032105 systemd[1]: Detected architecture arm64. Dec 16 12:44:33.032112 systemd[1]: Running in initrd. Dec 16 12:44:33.032117 systemd[1]: No hostname configured, using default hostname. Dec 16 12:44:33.032123 systemd[1]: Hostname set to . Dec 16 12:44:33.032129 systemd[1]: Initializing machine ID from random generator. Dec 16 12:44:33.032134 systemd[1]: Queued start job for default target initrd.target. Dec 16 12:44:33.032140 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:44:33.032147 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:44:33.032153 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 16 12:44:33.032159 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 12:44:33.032165 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:44:33.032172 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 12:44:33.032177 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 12:44:33.032184 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:44:33.032190 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:44:33.032196 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:44:33.032202 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:44:33.032207 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:44:33.032213 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:44:33.032220 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:44:33.032226 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:44:33.032231 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:44:33.032237 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:44:33.032243 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 12:44:33.032248 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 12:44:33.032254 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:44:33.032266 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:44:33.032273 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:44:33.032279 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:44:33.032285 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 12:44:33.032291 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 12:44:33.032299 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:44:33.032305 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 12:44:33.032311 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 12:44:33.032317 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 12:44:33.032323 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:44:33.032329 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:44:33.032355 systemd-journald[570]: Collecting audit messages is enabled. Dec 16 12:44:33.032373 systemd-journald[570]: Journal started Dec 16 12:44:33.032387 systemd-journald[570]: Runtime Journal (/run/log/journal/a3bc309252354d4192e3b8b0ccd6cf68) is 8M, max 78.3M, 70.3M free. Dec 16 12:44:33.052194 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:44:33.071233 systemd[1]: Started systemd-journald.service - Journal Service. 
Dec 16 12:44:33.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.072106 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 12:44:33.108840 kernel: audit: type=1130 audit(1765889073.070:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.108879 kernel: audit: type=1130 audit(1765889073.096:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.098318 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:44:33.137573 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 12:44:33.137595 kernel: audit: type=1130 audit(1765889073.120:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.121351 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 12:44:33.156250 kernel: audit: type=1130 audit(1765889073.140:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.143182 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:44:33.171660 kernel: Bridge firewalling registered Dec 16 12:44:33.165788 systemd-modules-load[573]: Inserted module 'br_netfilter' Dec 16 12:44:33.177233 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:44:33.191543 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:44:33.211791 kernel: audit: type=1130 audit(1765889073.195:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.212024 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:44:33.243549 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 16 12:44:33.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.250302 systemd-tmpfiles[583]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 12:44:33.275650 kernel: audit: type=1130 audit(1765889073.252:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.268204 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:44:33.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.284966 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:44:33.302962 kernel: audit: type=1130 audit(1765889073.280:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.323102 kernel: audit: type=1130 audit(1765889073.307:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.321151 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:44:33.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.328713 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 12:44:33.358580 kernel: audit: type=1130 audit(1765889073.325:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.363000 audit: BPF prog-id=6 op=LOAD Dec 16 12:44:33.366260 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:44:33.389201 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:44:33.410335 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:44:33.424804 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 12:44:33.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.439525 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:44:33.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:33.497056 dracut-cmdline[610]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 16 12:44:33.528072 systemd-resolved[596]: Positive Trust Anchors: Dec 16 12:44:33.528101 systemd-resolved[596]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:44:33.528104 systemd-resolved[596]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:44:33.528123 systemd-resolved[596]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:44:33.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.544820 systemd-resolved[596]: Defaulting to hostname 'linux'. Dec 16 12:44:33.545588 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:44:33.551465 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:44:33.687110 kernel: Loading iSCSI transport class v2.0-870. Dec 16 12:44:33.726107 kernel: iscsi: registered transport (tcp) Dec 16 12:44:33.757051 kernel: iscsi: registered transport (qla4xxx) Dec 16 12:44:33.757125 kernel: QLogic iSCSI HBA Driver Dec 16 12:44:33.819538 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:44:33.842215 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:44:33.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.848003 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:44:33.901142 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 12:44:33.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.907382 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 12:44:33.926252 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 12:44:33.953805 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Dec 16 12:44:33.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:33.962000 audit: BPF prog-id=7 op=LOAD Dec 16 12:44:33.962000 audit: BPF prog-id=8 op=LOAD Dec 16 12:44:33.964111 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:44:34.045345 systemd-udevd[838]: Using default interface naming scheme 'v257'. Dec 16 12:44:34.046816 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:44:34.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.064921 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:44:34.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.071788 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 12:44:34.095000 audit: BPF prog-id=9 op=LOAD Dec 16 12:44:34.098268 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:44:34.107553 dracut-pre-trigger[947]: rd.md=0: removing MD RAID activation Dec 16 12:44:34.138442 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:44:34.167102 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 16 12:44:34.167154 kernel: audit: type=1130 audit(1765889074.149:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.157948 systemd-networkd[948]: lo: Link UP Dec 16 12:44:34.157952 systemd-networkd[948]: lo: Gained carrier Dec 16 12:44:34.171267 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:44:34.175890 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:44:34.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.191562 systemd[1]: Reached target network.target - Network. Dec 16 12:44:34.215095 kernel: audit: type=1130 audit(1765889074.190:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.223547 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:44:34.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:34.247137 kernel: audit: type=1130 audit(1765889074.232:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.280655 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 12:44:34.328219 kernel: hv_vmbus: registering driver hv_netvsc Dec 16 12:44:34.339493 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:44:34.344972 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:44:34.372777 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 12:44:34.373024 kernel: audit: type=1131 audit(1765889074.356:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.357273 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:44:34.385327 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:44:34.401603 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:44:34.414324 kernel: hv_netvsc 000d3ac6-13bc-000d-3ac6-13bc000d3ac6 eth0: VF slot 1 added Dec 16 12:44:34.401720 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:44:34.427175 systemd-networkd[948]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:44:34.469199 kernel: audit: type=1130 audit(1765889074.436:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.469227 kernel: audit: type=1131 audit(1765889074.436:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.469235 kernel: hv_vmbus: registering driver hv_pci Dec 16 12:44:34.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.427189 systemd-networkd[948]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 16 12:44:34.483959 kernel: hv_pci ae067349-9632-45ce-abed-2c899e73c5ed: PCI VMBus probing: Using version 0x10004 Dec 16 12:44:34.428654 systemd-networkd[948]: eth0: Link UP Dec 16 12:44:34.429018 systemd-networkd[948]: eth0: Gained carrier Dec 16 12:44:34.507482 kernel: hv_pci ae067349-9632-45ce-abed-2c899e73c5ed: PCI host bridge to bus 9632:00 Dec 16 12:44:34.507682 kernel: pci_bus 9632:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 16 12:44:34.507797 kernel: pci_bus 9632:00: No busn resource found for root bus, will use [bus 00-ff] Dec 16 12:44:34.507873 kernel: pci 9632:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Dec 16 12:44:34.429032 systemd-networkd[948]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:44:34.521606 kernel: pci 9632:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 16 12:44:34.442528 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:44:34.531388 kernel: pci 9632:00:02.0: enabling Extended Tags Dec 16 12:44:34.529909 systemd-networkd[948]: eth0: DHCPv4 address 10.200.20.49/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:44:34.558309 kernel: pci 9632:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 9632:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Dec 16 12:44:34.558554 kernel: pci_bus 9632:00: busn_res: [bus 00-ff] end is updated to 00 Dec 16 12:44:34.561555 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:44:34.589067 kernel: pci 9632:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Dec 16 12:44:34.589327 kernel: audit: type=1130 audit(1765889074.573:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:34.750454 kernel: mlx5_core 9632:00:02.0: enabling device (0000 -> 0002) Dec 16 12:44:34.758750 kernel: mlx5_core 9632:00:02.0: PTM is not supported by PCIe Dec 16 12:44:34.759003 kernel: mlx5_core 9632:00:02.0: firmware version: 16.30.5006 Dec 16 12:44:34.796866 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 16 12:44:34.808293 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 12:44:34.900353 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 16 12:44:34.943138 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 16 12:44:34.978886 kernel: hv_netvsc 000d3ac6-13bc-000d-3ac6-13bc000d3ac6 eth0: VF registering: eth1 Dec 16 12:44:34.979136 kernel: mlx5_core 9632:00:02.0 eth1: joined to eth0 Dec 16 12:44:34.984459 kernel: mlx5_core 9632:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Dec 16 12:44:34.994510 systemd-networkd[948]: eth1: Interface name change detected, renamed to enP38450s1. Dec 16 12:44:34.999404 kernel: mlx5_core 9632:00:02.0 enP38450s1: renamed from eth1 Dec 16 12:44:35.022859 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Dec 16 12:44:35.133133 kernel: mlx5_core 9632:00:02.0 enP38450s1: Link up Dec 16 12:44:35.168896 systemd-networkd[948]: enP38450s1: Link UP Dec 16 12:44:35.171964 kernel: hv_netvsc 000d3ac6-13bc-000d-3ac6-13bc000d3ac6 eth0: Data path switched to VF: enP38450s1 Dec 16 12:44:35.189688 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 12:44:35.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.194936 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:44:35.218611 kernel: audit: type=1130 audit(1765889075.193:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.214305 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:44:35.223478 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:44:35.232956 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 12:44:35.263531 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:44:35.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.283101 kernel: audit: type=1130 audit(1765889075.271:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.430541 systemd-networkd[948]: enP38450s1: Gained carrier Dec 16 12:44:35.965569 disk-uuid[1054]: Warning: The kernel is still using the old partition table. Dec 16 12:44:35.965569 disk-uuid[1054]: The new table will be used at the next reboot or after you Dec 16 12:44:35.965569 disk-uuid[1054]: run partprobe(8) or kpartx(8) Dec 16 12:44:35.965569 disk-uuid[1054]: The operation has completed successfully. Dec 16 12:44:35.982660 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 12:44:35.982767 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 12:44:35.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.991996 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 12:44:36.013857 kernel: audit: type=1130 audit(1765889075.990:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:35.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:36.050125 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1221) Dec 16 12:44:36.050034 systemd-networkd[948]: eth0: Gained IPv6LL Dec 16 12:44:36.065618 kernel: BTRFS info (device sda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 16 12:44:36.065662 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:44:36.088585 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:44:36.088646 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:44:36.098103 kernel: BTRFS info (device sda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 16 12:44:36.098311 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 12:44:36.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:36.104526 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 12:44:37.020316 ignition[1240]: Ignition 2.22.0 Dec 16 12:44:37.020334 ignition[1240]: Stage: fetch-offline Dec 16 12:44:37.025007 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:44:37.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:37.020452 ignition[1240]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:44:37.034734 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 16 12:44:37.020462 ignition[1240]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:44:37.020539 ignition[1240]: parsed url from cmdline: "" Dec 16 12:44:37.020541 ignition[1240]: no config URL provided Dec 16 12:44:37.020544 ignition[1240]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:44:37.020550 ignition[1240]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:44:37.020554 ignition[1240]: failed to fetch config: resource requires networking Dec 16 12:44:37.020794 ignition[1240]: Ignition finished successfully Dec 16 12:44:37.072028 ignition[1247]: Ignition 2.22.0 Dec 16 12:44:37.072034 ignition[1247]: Stage: fetch Dec 16 12:44:37.072235 ignition[1247]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:44:37.072249 ignition[1247]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:44:37.072321 ignition[1247]: parsed url from cmdline: "" Dec 16 12:44:37.072324 ignition[1247]: no config URL provided Dec 16 12:44:37.072329 ignition[1247]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:44:37.072333 ignition[1247]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:44:37.072349 ignition[1247]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 16 12:44:37.146347 ignition[1247]: GET result: OK Dec 16 12:44:37.146442 ignition[1247]: config has been read from IMDS userdata Dec 16 12:44:37.146455 ignition[1247]: parsing config with SHA512: 0d40f98f95128ed6b6e62a5de9d425b487c14dd9e3ddcaa1e391cb4ce814f6c6edf204c4fe736836e81161a5189a76b56559df795c75f0a62ee9a19cd240fa79 Dec 16 12:44:37.151388 unknown[1247]: fetched base config from "system" Dec 16 12:44:37.151666 ignition[1247]: fetch: fetch complete Dec 16 12:44:37.160000 audit[1]: SERVICE_START pid=1 uid=0 
auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:37.151394 unknown[1247]: fetched base config from "system" Dec 16 12:44:37.151670 ignition[1247]: fetch: fetch passed Dec 16 12:44:37.151397 unknown[1247]: fetched user config from "azure" Dec 16 12:44:37.151721 ignition[1247]: Ignition finished successfully Dec 16 12:44:37.153597 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 12:44:37.161397 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 12:44:37.198707 ignition[1254]: Ignition 2.22.0 Dec 16 12:44:37.200115 ignition[1254]: Stage: kargs Dec 16 12:44:37.200363 ignition[1254]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:44:37.204489 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 12:44:37.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:37.200370 ignition[1254]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:44:37.212905 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 12:44:37.200931 ignition[1254]: kargs: kargs passed Dec 16 12:44:37.200982 ignition[1254]: Ignition finished successfully Dec 16 12:44:37.244119 ignition[1261]: Ignition 2.22.0 Dec 16 12:44:37.244129 ignition[1261]: Stage: disks Dec 16 12:44:37.247877 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 12:44:37.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:37.244333 ignition[1261]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:44:37.255477 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 12:44:37.244340 ignition[1261]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:44:37.262277 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 12:44:37.244880 ignition[1261]: disks: disks passed Dec 16 12:44:37.270438 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:44:37.244923 ignition[1261]: Ignition finished successfully Dec 16 12:44:37.278479 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:44:37.286947 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:44:37.295755 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 12:44:37.429001 systemd-fsck[1270]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Dec 16 12:44:37.436830 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 12:44:37.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:37.443652 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 12:44:37.865100 kernel: EXT4-fs (sda9): mounted filesystem fa93fc03-2e23-46f9-9013-1e396e3304a8 r/w with ordered data mode. Quota mode: none. Dec 16 12:44:37.865413 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 12:44:37.869226 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
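The fetch stage above pulled the Ignition config straight from the Azure IMDS userData endpoint whose URL appears in the log, then logged a SHA512 of the parsed config. A minimal sketch of reproducing that request by hand for debugging; the "Metadata: true" header requirement, the base64 encoding of the payload, and the idea that the logged digest covers the decoded bytes are assumptions about IMDS/Ignition behaviour, not things shown in the log itself:

    # Hedged sketch: re-fetch the userData blob that Ignition read above.
    # Assumptions (not visible in the log): Azure IMDS requires the
    # "Metadata: true" request header, and userData comes back base64-encoded.
    import base64
    import hashlib
    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        encoded = resp.read()

    config = base64.b64decode(encoded)
    # Ignition logged "parsing config with SHA512: ..."; hashing the decoded
    # payload is one plausible way to cross-check which config was applied
    # (whether the digest is taken over exactly these bytes is an assumption).
    print(hashlib.sha512(config).hexdigest())
    print(config.decode("utf-8", errors="replace"))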
Dec 16 12:44:37.906865 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:44:37.920821 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 12:44:37.929840 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 16 12:44:37.939564 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 12:44:37.940197 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:44:37.954761 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 12:44:37.966277 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 12:44:37.982115 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1284) Dec 16 12:44:37.991989 kernel: BTRFS info (device sda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 16 12:44:37.992046 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:44:38.003131 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:44:38.003235 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:44:38.004534 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:44:38.509046 coreos-metadata[1286]: Dec 16 12:44:38.508 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 12:44:38.516939 coreos-metadata[1286]: Dec 16 12:44:38.516 INFO Fetch successful Dec 16 12:44:38.520889 coreos-metadata[1286]: Dec 16 12:44:38.516 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 16 12:44:38.530443 coreos-metadata[1286]: Dec 16 12:44:38.530 INFO Fetch successful Dec 16 12:44:38.545406 coreos-metadata[1286]: Dec 16 12:44:38.545 INFO wrote hostname ci-4515.1.0-a-a4975b77c5 to /sysroot/etc/hostname Dec 16 12:44:38.552296 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 12:44:38.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:38.753886 initrd-setup-root[1316]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 12:44:38.789846 initrd-setup-root[1323]: cut: /sysroot/etc/group: No such file or directory Dec 16 12:44:38.795430 initrd-setup-root[1330]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 12:44:38.800560 initrd-setup-root[1337]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 12:44:39.705460 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 12:44:39.718605 kernel: kauditd_printk_skb: 8 callbacks suppressed Dec 16 12:44:39.718627 kernel: audit: type=1130 audit(1765889079.710:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:39.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:39.713559 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 12:44:39.748380 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Dec 16 12:44:39.782144 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 12:44:39.787331 kernel: BTRFS info (device sda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 16 12:44:39.798234 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 12:44:39.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:39.823248 ignition[1407]: INFO : Ignition 2.22.0 Dec 16 12:44:39.823248 ignition[1407]: INFO : Stage: mount Dec 16 12:44:39.830709 kernel: audit: type=1130 audit(1765889079.806:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:39.829151 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 12:44:39.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:39.849439 ignition[1407]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:44:39.849439 ignition[1407]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:44:39.849439 ignition[1407]: INFO : mount: mount passed Dec 16 12:44:39.849439 ignition[1407]: INFO : Ignition finished successfully Dec 16 12:44:39.872223 kernel: audit: type=1130 audit(1765889079.834:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:39.849733 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 12:44:39.878938 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:44:39.902105 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1416) Dec 16 12:44:39.913674 kernel: BTRFS info (device sda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 16 12:44:39.913687 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:44:39.923799 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:44:39.923817 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:44:39.925609 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 12:44:39.960516 ignition[1433]: INFO : Ignition 2.22.0 Dec 16 12:44:39.964935 ignition[1433]: INFO : Stage: files Dec 16 12:44:39.964935 ignition[1433]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:44:39.964935 ignition[1433]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:44:39.964935 ignition[1433]: DEBUG : files: compiled without relabeling support, skipping Dec 16 12:44:39.983296 ignition[1433]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 12:44:39.983296 ignition[1433]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 12:44:40.044887 ignition[1433]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 12:44:40.050310 ignition[1433]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 12:44:40.050310 ignition[1433]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 12:44:40.045306 unknown[1433]: wrote ssh authorized keys file for user: core Dec 16 12:44:40.093090 ignition[1433]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 16 12:44:40.100944 ignition[1433]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Dec 16 12:44:40.262039 ignition[1433]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 12:44:40.381148 ignition[1433]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 16 12:44:40.389603 ignition[1433]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 12:44:40.389603 ignition[1433]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 12:44:40.389603 ignition[1433]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:44:40.389603 ignition[1433]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:44:40.389603 ignition[1433]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:44:40.389603 ignition[1433]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:44:40.389603 ignition[1433]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:44:40.389603 ignition[1433]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:44:40.450018 ignition[1433]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:44:40.450018 ignition[1433]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:44:40.450018 ignition[1433]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:44:40.450018 ignition[1433]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:44:40.450018 ignition[1433]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:44:40.450018 ignition[1433]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Dec 16 12:44:40.985208 ignition[1433]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 12:44:41.179467 ignition[1433]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 12:44:41.179467 ignition[1433]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 12:44:41.216473 ignition[1433]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:44:41.228988 ignition[1433]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:44:41.228988 ignition[1433]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 12:44:41.228988 ignition[1433]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 12:44:41.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.263103 ignition[1433]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 12:44:41.263103 ignition[1433]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:44:41.263103 ignition[1433]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:44:41.263103 ignition[1433]: INFO : files: files passed Dec 16 12:44:41.263103 ignition[1433]: INFO : Ignition finished successfully Dec 16 12:44:41.297262 kernel: audit: type=1130 audit(1765889081.246:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.239450 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 12:44:41.263432 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 12:44:41.292351 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 12:44:41.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.305772 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 12:44:41.346388 kernel: audit: type=1130 audit(1765889081.314:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:41.346410 kernel: audit: type=1131 audit(1765889081.314:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.305934 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 12:44:41.372501 initrd-setup-root-after-ignition[1465]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:44:41.372501 initrd-setup-root-after-ignition[1465]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:44:41.385315 initrd-setup-root-after-ignition[1469]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:44:41.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.380248 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:44:41.424500 kernel: audit: type=1130 audit(1765889081.390:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.390696 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 12:44:41.413573 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 12:44:41.463827 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 12:44:41.463934 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 12:44:41.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.488677 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 12:44:41.517896 kernel: audit: type=1130 audit(1765889081.472:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.517925 kernel: audit: type=1131 audit(1765889081.487:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.504507 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 12:44:41.508823 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 12:44:41.509715 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 12:44:41.543109 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Dec 16 12:44:41.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.549976 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:44:41.574163 kernel: audit: type=1130 audit(1765889081.548:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.585210 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:44:41.590323 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:44:41.595787 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:44:41.605573 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:44:41.614992 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:44:41.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.615175 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:44:41.627330 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:44:41.631745 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:44:41.640417 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:44:41.648859 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:44:41.659759 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:44:41.669417 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:44:41.679991 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:44:41.689530 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:44:41.699084 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 12:44:41.708094 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:44:41.717839 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:44:41.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.725833 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:44:41.725958 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:44:41.737337 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:44:41.742045 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:44:41.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.751285 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:44:41.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:41.755130 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:44:41.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.760336 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 12:44:41.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.760448 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:44:41.774503 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 12:44:41.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.774608 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:44:41.779768 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 12:44:41.779838 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:44:41.788060 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 16 12:44:41.855470 ignition[1489]: INFO : Ignition 2.22.0 Dec 16 12:44:41.855470 ignition[1489]: INFO : Stage: umount Dec 16 12:44:41.855470 ignition[1489]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:44:41.855470 ignition[1489]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:44:41.855470 ignition[1489]: INFO : umount: umount passed Dec 16 12:44:41.855470 ignition[1489]: INFO : Ignition finished successfully Dec 16 12:44:41.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.788156 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 12:44:41.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.799499 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Dec 16 12:44:41.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.809383 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:44:41.809545 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:44:41.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.834296 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:44:41.850032 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 12:44:41.850250 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:44:41.858365 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:44:41.858469 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:44:41.864960 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:44:41.865167 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:44:41.881487 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 12:44:41.881604 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 12:44:41.892658 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 12:44:42.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.892782 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:44:42.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.899760 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:44:41.899852 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:44:41.908566 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 12:44:41.908641 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 12:44:41.917603 systemd[1]: Stopped target network.target - Network. Dec 16 12:44:42.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.926377 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 12:44:41.926468 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:44:42.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.934672 systemd[1]: Stopped target paths.target - Path Units. Dec 16 12:44:41.943162 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 12:44:42.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:42.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:42.093000 audit: BPF prog-id=9 op=UNLOAD Dec 16 12:44:42.096000 audit: BPF prog-id=6 op=UNLOAD Dec 16 12:44:41.949130 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:44:41.962031 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 12:44:41.971313 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 12:44:41.983843 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 12:44:42.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.983915 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:44:42.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.992348 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 12:44:42.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:41.992397 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:44:42.001628 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 12:44:42.001656 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:44:42.009734 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 12:44:42.009790 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 12:44:42.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:42.019379 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 12:44:42.019421 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 12:44:42.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:42.027123 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 12:44:42.035511 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 12:44:42.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:42.053327 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 12:44:42.054016 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 12:44:42.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:42.054137 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 12:44:42.066757 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Dec 16 12:44:42.066866 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 12:44:42.275392 kernel: hv_netvsc 000d3ac6-13bc-000d-3ac6-13bc000d3ac6 eth0: Data path switched from VF: enP38450s1 Dec 16 12:44:42.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:42.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:42.085957 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:44:42.087119 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 12:44:42.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:42.095355 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 12:44:42.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:42.101777 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 12:44:42.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:42.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:42.101834 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:44:42.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:42.114414 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 12:44:42.124093 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 12:44:42.124192 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:44:42.133229 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 12:44:42.133293 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:44:42.140159 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:44:42.140202 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:44:42.148619 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:44:42.169488 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:44:42.169616 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:44:42.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:42.185215 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Dec 16 12:44:42.185300 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:44:42.193856 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:44:42.193902 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:44:42.198341 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 12:44:42.198400 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:44:42.211487 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:44:42.211553 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:44:42.226671 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:44:42.226755 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:44:42.240163 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:44:42.253805 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:44:42.253902 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:44:42.264235 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 12:44:42.264291 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:44:42.270681 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:44:42.270737 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:44:42.286728 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:44:42.288118 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:44:42.294784 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:44:42.294866 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:44:42.304500 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 12:44:42.304635 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 12:44:42.353266 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:44:42.353405 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:44:42.360933 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:44:42.372004 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:44:42.400299 systemd[1]: Switching root. Dec 16 12:44:42.709671 systemd-journald[570]: Journal stopped Dec 16 12:44:46.952412 systemd-journald[570]: Received SIGTERM from PID 1 (systemd). Dec 16 12:44:46.952448 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:44:46.952457 kernel: SELinux: policy capability open_perms=1 Dec 16 12:44:46.952467 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:44:46.952474 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:44:46.952480 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:44:46.952486 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:44:46.952492 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:44:46.952498 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:44:46.952505 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:44:46.952512 systemd[1]: Successfully loaded SELinux policy in 152.003ms. Dec 16 12:44:46.952519 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.802ms. 
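Much of the volume in the transcript above is audit SERVICE_START (type=1130) and SERVICE_STOP (type=1131) records emitted as each unit in the initrd starts and is torn down. A hedged sketch of extracting a unit start/stop timeline from a saved copy of such a log; the file name boot.log is hypothetical, only the unit= and res= fields visible above are relied on, and one record per line is assumed (as in a normal journal dump):

    # Hedged sketch: list unit start/stop events from a saved boot log.
    # Matches the "audit: type=1130/1131 ... unit=... res=..." records seen
    # above; anything beyond those fields is not assumed.
    import re

    EVENT = re.compile(
        r"audit: type=(?P<type>113[01]) audit\((?P<ts>[\d.]+):\d+\)"
        r".*?unit=(?P<unit>[\w@\\.-]+).*?res=(?P<res>\w+)"
    )

    with open("boot.log", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = EVENT.search(line)
            if m:
                kind = "start" if m["type"] == "1130" else "stop"
                print(f"{m['ts']}  {kind:<5}  {m['unit']}  ({m['res']})")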
Dec 16 12:44:46.952526 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:44:46.952533 systemd[1]: Detected virtualization microsoft. Dec 16 12:44:46.952541 systemd[1]: Detected architecture arm64. Dec 16 12:44:46.952548 systemd[1]: Detected first boot. Dec 16 12:44:46.952555 systemd[1]: Hostname set to . Dec 16 12:44:46.952561 systemd[1]: Initializing machine ID from random generator. Dec 16 12:44:46.952567 zram_generator::config[1533]: No configuration found. Dec 16 12:44:46.952576 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:44:46.952582 systemd[1]: Populated /etc with preset unit settings. Dec 16 12:44:46.952588 kernel: kauditd_printk_skb: 44 callbacks suppressed Dec 16 12:44:46.952594 kernel: audit: type=1334 audit(1765889086.042:95): prog-id=12 op=LOAD Dec 16 12:44:46.952601 kernel: audit: type=1334 audit(1765889086.045:96): prog-id=3 op=UNLOAD Dec 16 12:44:46.952607 kernel: audit: type=1334 audit(1765889086.050:97): prog-id=13 op=LOAD Dec 16 12:44:46.952615 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:44:46.952621 kernel: audit: type=1334 audit(1765889086.050:98): prog-id=14 op=LOAD Dec 16 12:44:46.952628 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:44:46.952634 kernel: audit: type=1334 audit(1765889086.050:99): prog-id=4 op=UNLOAD Dec 16 12:44:46.952640 kernel: audit: type=1334 audit(1765889086.050:100): prog-id=5 op=UNLOAD Dec 16 12:44:46.952647 kernel: audit: type=1131 audit(1765889086.050:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:46.952654 kernel: audit: type=1334 audit(1765889086.078:102): prog-id=12 op=UNLOAD Dec 16 12:44:46.952661 kernel: audit: type=1130 audit(1765889086.098:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:46.952667 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:44:46.952674 kernel: audit: type=1131 audit(1765889086.098:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:46.952681 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:44:46.952687 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:44:46.952695 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:44:46.952701 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:44:46.952708 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:44:46.952716 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:44:46.952723 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Dec 16 12:44:46.952730 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:44:46.952738 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:44:46.952745 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:44:46.952752 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:44:46.952758 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:44:46.952765 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:44:46.952772 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:44:46.952779 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 16 12:44:46.952786 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:44:46.952793 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:44:46.952799 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:44:46.952806 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:44:46.952813 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:44:46.952821 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:44:46.952828 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:44:46.952834 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:44:46.952842 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 12:44:46.952849 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:44:46.952856 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:44:46.952862 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:44:46.952870 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:44:46.952877 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:44:46.952883 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:44:46.952890 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 12:44:46.952898 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:44:46.952905 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 12:44:46.952912 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 12:44:46.952918 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:44:46.952925 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:44:46.952932 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:44:46.952940 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:44:46.952946 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:44:46.952953 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:44:46.952960 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Dec 16 12:44:46.952967 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:44:46.952974 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:44:46.952981 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:44:46.952989 systemd[1]: Reached target machines.target - Containers. Dec 16 12:44:46.952995 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:44:46.953002 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:44:46.953009 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:44:46.953016 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 12:44:46.953023 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:44:46.953030 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:44:46.953038 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:44:46.953045 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:44:46.953051 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:44:46.953058 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:44:46.953065 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:44:46.953072 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:44:46.953078 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:44:46.953097 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:44:46.953105 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:44:46.953112 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:44:46.953119 kernel: fuse: init (API version 7.41) Dec 16 12:44:46.953126 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:44:46.953132 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:44:46.953140 kernel: ACPI: bus type drm_connector registered Dec 16 12:44:46.953146 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:44:46.953153 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 12:44:46.953160 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:44:46.953167 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:44:46.953174 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:44:46.953181 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:44:46.953189 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:44:46.953196 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:44:46.953205 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Dec 16 12:44:46.953212 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:44:46.953219 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:44:46.953225 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:44:46.953263 systemd-journald[1624]: Collecting audit messages is enabled. Dec 16 12:44:46.953282 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:44:46.953290 systemd-journald[1624]: Journal started Dec 16 12:44:46.953307 systemd-journald[1624]: Runtime Journal (/run/log/journal/94b686ee1fc4445f8d2b3e11f3044c58) is 8M, max 78.3M, 70.3M free. Dec 16 12:44:46.410000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 12:44:46.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:46.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:46.751000 audit: BPF prog-id=14 op=UNLOAD Dec 16 12:44:46.751000 audit: BPF prog-id=13 op=UNLOAD Dec 16 12:44:46.752000 audit: BPF prog-id=15 op=LOAD Dec 16 12:44:46.752000 audit: BPF prog-id=16 op=LOAD Dec 16 12:44:46.752000 audit: BPF prog-id=17 op=LOAD Dec 16 12:44:46.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:46.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:46.945000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 12:44:46.945000 audit[1624]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd553c1a0 a2=4000 a3=0 items=0 ppid=1 pid=1624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:44:46.945000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 12:44:46.035966 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:44:46.051155 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 16 12:44:46.051649 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:44:46.054074 systemd[1]: systemd-journald.service: Consumed 2.488s CPU time. Dec 16 12:44:46.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:46.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:46.965114 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:44:46.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:46.966229 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:44:46.966427 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:44:46.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:46.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:46.971602 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:44:46.971751 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:44:46.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:46.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:46.976564 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:44:46.976705 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:44:46.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:46.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:46.984729 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:44:46.984878 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:44:46.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:46.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:46.989775 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:44:46.989923 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:44:46.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:46.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:46.994997 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:44:46.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:47.000473 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:44:47.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:47.006954 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:44:47.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:47.013850 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:44:47.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:47.019668 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:44:47.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:47.034028 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:44:47.039468 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 12:44:47.046232 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:44:47.062000 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 12:44:47.067791 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:44:47.067923 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:44:47.073628 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:44:47.079662 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:44:47.079872 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:44:47.081549 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:44:47.094186 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:44:47.099189 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 16 12:44:47.100469 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:44:47.105134 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:44:47.106109 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:44:47.113177 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:44:47.119201 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:44:47.124987 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:44:47.129909 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:44:47.316122 kernel: loop1: detected capacity change from 0 to 207008 Dec 16 12:44:47.324889 systemd-journald[1624]: Time spent on flushing to /var/log/journal/94b686ee1fc4445f8d2b3e11f3044c58 is 10.440ms for 1083 entries. Dec 16 12:44:47.324889 systemd-journald[1624]: System Journal (/var/log/journal/94b686ee1fc4445f8d2b3e11f3044c58) is 8M, max 2.2G, 2.2G free. Dec 16 12:44:48.868508 systemd-journald[1624]: Received client request to flush runtime journal. Dec 16 12:44:48.868590 kernel: loop2: detected capacity change from 0 to 109872 Dec 16 12:44:47.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:47.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:47.388512 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:44:47.468398 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:44:47.474002 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:44:47.480688 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:44:48.870263 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:44:48.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:50.266639 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 12:44:50.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:50.272138 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:44:50.274135 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:44:50.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:50.280000 audit: BPF prog-id=18 op=LOAD Dec 16 12:44:50.280000 audit: BPF prog-id=19 op=LOAD Dec 16 12:44:50.280000 audit: BPF prog-id=20 op=LOAD Dec 16 12:44:50.282434 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 12:44:50.286000 audit: BPF prog-id=21 op=LOAD Dec 16 12:44:50.288620 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:44:50.293770 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:44:50.689000 audit: BPF prog-id=22 op=LOAD Dec 16 12:44:50.689000 audit: BPF prog-id=23 op=LOAD Dec 16 12:44:50.689000 audit: BPF prog-id=24 op=LOAD Dec 16 12:44:50.691641 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 12:44:50.698000 audit: BPF prog-id=25 op=LOAD Dec 16 12:44:50.698000 audit: BPF prog-id=26 op=LOAD Dec 16 12:44:50.698000 audit: BPF prog-id=27 op=LOAD Dec 16 12:44:50.700350 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 12:44:50.710107 kernel: loop3: detected capacity change from 0 to 100192 Dec 16 12:44:50.741894 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 12:44:50.743512 systemd-nsresourced[1695]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 12:44:50.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:50.746770 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 12:44:50.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:50.780528 systemd-tmpfiles[1691]: ACLs are not supported, ignoring. Dec 16 12:44:50.780542 systemd-tmpfiles[1691]: ACLs are not supported, ignoring. Dec 16 12:44:50.790275 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:44:50.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:50.841168 systemd-oomd[1689]: No swap; memory pressure usage will be degraded Dec 16 12:44:50.841837 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 12:44:50.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:50.883817 systemd-resolved[1690]: Positive Trust Anchors: Dec 16 12:44:50.883839 systemd-resolved[1690]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:44:50.883842 systemd-resolved[1690]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:44:50.883861 systemd-resolved[1690]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:44:51.681126 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 12:44:51.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:51.695185 kernel: kauditd_printk_skb: 50 callbacks suppressed Dec 16 12:44:51.695278 kernel: audit: type=1130 audit(1765889091.685:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:51.693298 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:44:51.714621 kernel: audit: type=1334 audit(1765889091.685:154): prog-id=8 op=UNLOAD Dec 16 12:44:51.714748 kernel: audit: type=1334 audit(1765889091.685:155): prog-id=7 op=UNLOAD Dec 16 12:44:51.685000 audit: BPF prog-id=8 op=UNLOAD Dec 16 12:44:51.685000 audit: BPF prog-id=7 op=UNLOAD Dec 16 12:44:51.689000 audit: BPF prog-id=28 op=LOAD Dec 16 12:44:51.723031 kernel: audit: type=1334 audit(1765889091.689:156): prog-id=28 op=LOAD Dec 16 12:44:51.690000 audit: BPF prog-id=29 op=LOAD Dec 16 12:44:51.726981 kernel: audit: type=1334 audit(1765889091.690:157): prog-id=29 op=LOAD Dec 16 12:44:51.727043 systemd-resolved[1690]: Using system hostname 'ci-4515.1.0-a-a4975b77c5'. Dec 16 12:44:51.728641 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:44:51.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:51.735386 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:44:51.752110 kernel: audit: type=1130 audit(1765889091.733:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:51.765284 systemd-udevd[1712]: Using default interface naming scheme 'v257'. Dec 16 12:44:52.234115 kernel: loop4: detected capacity change from 0 to 27736 Dec 16 12:44:52.402810 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:44:52.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:52.421000 audit: BPF prog-id=30 op=LOAD Dec 16 12:44:52.428551 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Dec 16 12:44:52.430868 kernel: audit: type=1130 audit(1765889092.409:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:52.430943 kernel: audit: type=1334 audit(1765889092.421:160): prog-id=30 op=LOAD Dec 16 12:44:52.466757 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 16 12:44:52.525116 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#17 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 12:44:52.554614 systemd-networkd[1724]: lo: Link UP Dec 16 12:44:52.554953 systemd-networkd[1724]: lo: Gained carrier Dec 16 12:44:52.556909 systemd-networkd[1724]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:44:52.557064 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:44:52.561668 systemd-networkd[1724]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:44:52.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:52.563686 systemd[1]: Reached target network.target - Network. Dec 16 12:44:52.580949 kernel: audit: type=1130 audit(1765889092.562:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:52.583234 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:44:52.593264 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Dec 16 12:44:52.641110 kernel: mlx5_core 9632:00:02.0 enP38450s1: Link up Dec 16 12:44:52.666409 kernel: hv_netvsc 000d3ac6-13bc-000d-3ac6-13bc000d3ac6 eth0: Data path switched to VF: enP38450s1 Dec 16 12:44:52.666035 systemd-networkd[1724]: enP38450s1: Link UP Dec 16 12:44:52.666183 systemd-networkd[1724]: eth0: Link UP Dec 16 12:44:52.666186 systemd-networkd[1724]: eth0: Gained carrier Dec 16 12:44:52.666203 systemd-networkd[1724]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:44:52.673592 systemd-networkd[1724]: enP38450s1: Gained carrier Dec 16 12:44:52.680206 systemd-networkd[1724]: eth0: DHCPv4 address 10.200.20.49/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:44:52.703114 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 12:44:52.979185 kernel: hv_vmbus: registering driver hv_balloon Dec 16 12:44:52.986299 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 16 12:44:52.986380 kernel: hv_balloon: Memory hot add disabled on ARM64 Dec 16 12:44:53.041135 kernel: hv_vmbus: registering driver hyperv_fb Dec 16 12:44:53.049711 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 16 12:44:53.049824 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 16 12:44:53.053758 kernel: Console: switching to colour dummy device 80x25 Dec 16 12:44:53.059762 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 12:44:53.100285 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:44:53.107820 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:44:53.108034 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:44:53.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:53.116296 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:44:53.130697 kernel: audit: type=1130 audit(1765889093.112:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:53.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:53.137127 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 12:44:53.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:53.290105 kernel: loop5: detected capacity change from 0 to 207008 Dec 16 12:44:53.310171 kernel: loop6: detected capacity change from 0 to 109872 Dec 16 12:44:53.689155 kernel: MACsec IEEE 802.1AE Dec 16 12:44:53.697681 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 16 12:44:53.703858 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Dec 16 12:44:53.770202 kernel: loop7: detected capacity change from 0 to 100192 Dec 16 12:44:53.828449 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 12:44:53.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:53.878163 kernel: loop1: detected capacity change from 0 to 27736 Dec 16 12:44:54.222601 (sd-merge)[1797]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Dec 16 12:44:54.226041 (sd-merge)[1797]: Merged extensions into '/usr'. Dec 16 12:44:54.229852 systemd[1]: Reload requested from client PID 1673 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:44:54.230166 systemd[1]: Reloading... Dec 16 12:44:54.299110 zram_generator::config[1882]: No configuration found. Dec 16 12:44:54.474743 systemd[1]: Reloading finished in 244 ms. Dec 16 12:44:54.499163 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:44:54.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:54.511298 systemd[1]: Starting ensure-sysext.service... Dec 16 12:44:54.516267 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:44:54.521000 audit: BPF prog-id=31 op=LOAD Dec 16 12:44:54.521000 audit: BPF prog-id=22 op=UNLOAD Dec 16 12:44:54.521000 audit: BPF prog-id=32 op=LOAD Dec 16 12:44:54.521000 audit: BPF prog-id=33 op=LOAD Dec 16 12:44:54.521000 audit: BPF prog-id=23 op=UNLOAD Dec 16 12:44:54.521000 audit: BPF prog-id=24 op=UNLOAD Dec 16 12:44:54.522000 audit: BPF prog-id=34 op=LOAD Dec 16 12:44:54.522000 audit: BPF prog-id=30 op=UNLOAD Dec 16 12:44:54.523000 audit: BPF prog-id=35 op=LOAD Dec 16 12:44:54.523000 audit: BPF prog-id=15 op=UNLOAD Dec 16 12:44:54.523000 audit: BPF prog-id=36 op=LOAD Dec 16 12:44:54.523000 audit: BPF prog-id=37 op=LOAD Dec 16 12:44:54.523000 audit: BPF prog-id=16 op=UNLOAD Dec 16 12:44:54.523000 audit: BPF prog-id=17 op=UNLOAD Dec 16 12:44:54.524000 audit: BPF prog-id=38 op=LOAD Dec 16 12:44:54.524000 audit: BPF prog-id=18 op=UNLOAD Dec 16 12:44:54.524000 audit: BPF prog-id=39 op=LOAD Dec 16 12:44:54.524000 audit: BPF prog-id=40 op=LOAD Dec 16 12:44:54.524000 audit: BPF prog-id=19 op=UNLOAD Dec 16 12:44:54.524000 audit: BPF prog-id=20 op=UNLOAD Dec 16 12:44:54.525000 audit: BPF prog-id=41 op=LOAD Dec 16 12:44:54.525000 audit: BPF prog-id=25 op=UNLOAD Dec 16 12:44:54.525000 audit: BPF prog-id=42 op=LOAD Dec 16 12:44:54.525000 audit: BPF prog-id=43 op=LOAD Dec 16 12:44:54.525000 audit: BPF prog-id=26 op=UNLOAD Dec 16 12:44:54.525000 audit: BPF prog-id=27 op=UNLOAD Dec 16 12:44:54.526000 audit: BPF prog-id=44 op=LOAD Dec 16 12:44:54.526000 audit: BPF prog-id=21 op=UNLOAD Dec 16 12:44:54.526000 audit: BPF prog-id=45 op=LOAD Dec 16 12:44:54.526000 audit: BPF prog-id=46 op=LOAD Dec 16 12:44:54.526000 audit: BPF prog-id=28 op=UNLOAD Dec 16 12:44:54.526000 audit: BPF prog-id=29 op=UNLOAD Dec 16 12:44:54.532998 systemd[1]: Reload requested from client PID 1931 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:44:54.533162 systemd[1]: Reloading... 
Dec 16 12:44:54.603122 zram_generator::config[1985]: No configuration found. Dec 16 12:44:54.606272 systemd-networkd[1724]: eth0: Gained IPv6LL Dec 16 12:44:54.738185 systemd[1]: Reloading finished in 204 ms. Dec 16 12:44:54.755425 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:44:54.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:54.760000 audit: BPF prog-id=47 op=LOAD Dec 16 12:44:54.760000 audit: BPF prog-id=48 op=LOAD Dec 16 12:44:54.760000 audit: BPF prog-id=45 op=UNLOAD Dec 16 12:44:54.760000 audit: BPF prog-id=46 op=UNLOAD Dec 16 12:44:54.761000 audit: BPF prog-id=49 op=LOAD Dec 16 12:44:54.761000 audit: BPF prog-id=41 op=UNLOAD Dec 16 12:44:54.761000 audit: BPF prog-id=50 op=LOAD Dec 16 12:44:54.761000 audit: BPF prog-id=51 op=LOAD Dec 16 12:44:54.761000 audit: BPF prog-id=42 op=UNLOAD Dec 16 12:44:54.761000 audit: BPF prog-id=43 op=UNLOAD Dec 16 12:44:54.761000 audit: BPF prog-id=52 op=LOAD Dec 16 12:44:54.761000 audit: BPF prog-id=35 op=UNLOAD Dec 16 12:44:54.761000 audit: BPF prog-id=53 op=LOAD Dec 16 12:44:54.761000 audit: BPF prog-id=54 op=LOAD Dec 16 12:44:54.761000 audit: BPF prog-id=36 op=UNLOAD Dec 16 12:44:54.761000 audit: BPF prog-id=37 op=UNLOAD Dec 16 12:44:54.761000 audit: BPF prog-id=55 op=LOAD Dec 16 12:44:54.762000 audit: BPF prog-id=34 op=UNLOAD Dec 16 12:44:54.762000 audit: BPF prog-id=56 op=LOAD Dec 16 12:44:54.762000 audit: BPF prog-id=31 op=UNLOAD Dec 16 12:44:54.762000 audit: BPF prog-id=57 op=LOAD Dec 16 12:44:54.762000 audit: BPF prog-id=58 op=LOAD Dec 16 12:44:54.762000 audit: BPF prog-id=32 op=UNLOAD Dec 16 12:44:54.762000 audit: BPF prog-id=33 op=UNLOAD Dec 16 12:44:54.762000 audit: BPF prog-id=59 op=LOAD Dec 16 12:44:54.762000 audit: BPF prog-id=44 op=UNLOAD Dec 16 12:44:54.764000 audit: BPF prog-id=60 op=LOAD Dec 16 12:44:54.764000 audit: BPF prog-id=38 op=UNLOAD Dec 16 12:44:54.764000 audit: BPF prog-id=61 op=LOAD Dec 16 12:44:54.764000 audit: BPF prog-id=62 op=LOAD Dec 16 12:44:54.764000 audit: BPF prog-id=39 op=UNLOAD Dec 16 12:44:54.764000 audit: BPF prog-id=40 op=UNLOAD Dec 16 12:44:54.775903 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:44:54.781260 systemd-tmpfiles[1932]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:44:54.781281 systemd-tmpfiles[1932]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 12:44:54.781531 systemd-tmpfiles[1932]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 12:44:54.781618 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:44:54.782221 systemd-tmpfiles[1932]: ACLs are not supported, ignoring. Dec 16 12:44:54.782265 systemd-tmpfiles[1932]: ACLs are not supported, ignoring. Dec 16 12:44:54.783332 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:44:54.795047 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:44:54.804824 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 16 12:44:54.809246 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:44:54.809419 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:44:54.809491 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:44:54.810287 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:44:54.810509 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:44:54.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:54.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:54.816565 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:44:54.818256 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:44:54.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:54.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:54.824050 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:44:54.824290 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:44:54.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:54.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:54.833280 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:44:54.834154 systemd-tmpfiles[1932]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:44:54.834161 systemd-tmpfiles[1932]: Skipping /boot Dec 16 12:44:54.836347 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:44:54.839888 systemd-tmpfiles[1932]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:44:54.840005 systemd-tmpfiles[1932]: Skipping /boot Dec 16 12:44:54.865325 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:44:54.873973 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:44:54.878983 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 16 12:44:54.879210 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:44:54.879506 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:44:54.882250 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:44:54.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:54.889268 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:44:54.889477 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:44:54.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:54.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:54.894587 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:44:54.894767 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:44:54.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:54.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:54.899940 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:44:54.902112 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:44:54.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:54.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:54.914682 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:44:54.922353 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 12:44:54.930468 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:44:54.931828 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:44:54.943354 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:44:54.948355 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:44:54.955979 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 16 12:44:54.961441 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:44:54.962209 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:44:54.965350 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 12:44:54.970580 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:44:54.973324 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 12:44:54.978510 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 12:44:54.989365 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 12:44:55.000367 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:44:55.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:55.007626 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:44:55.007857 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:44:55.007000 audit[2053]: SYSTEM_BOOT pid=2053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 12:44:55.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:55.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:55.013068 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:44:55.013278 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:44:55.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:55.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:55.018229 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:44:55.018450 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:44:55.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:55.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:44:55.024706 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:44:55.024888 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:44:55.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:55.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:55.035167 systemd[1]: Finished ensure-sysext.service. Dec 16 12:44:55.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:55.042690 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:44:55.042854 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:44:55.043964 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 12:44:55.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:55.564635 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 12:44:55.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:44:56.176000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 12:44:56.176000 audit[2075]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd2046da0 a2=420 a3=0 items=0 ppid=2038 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:44:56.176000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:44:56.178285 augenrules[2075]: No rules Dec 16 12:44:56.179375 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:44:56.179653 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:44:57.528975 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:44:57.534473 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:45:05.031740 ldconfig[2046]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 12:45:05.049618 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 12:45:05.058885 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Dec 16 12:45:05.094577 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:45:05.099561 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:45:05.103865 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:45:05.108565 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 12:45:05.113688 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:45:05.118307 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:45:05.123124 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 12:45:05.127968 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 12:45:05.132551 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:45:05.137503 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:45:05.137538 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:45:05.141050 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:45:05.170439 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:45:05.176355 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:45:05.181866 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:45:05.186905 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:45:05.191683 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:45:05.197988 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:45:05.202691 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:45:05.208034 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:45:05.213409 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:45:05.217186 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:45:05.220849 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:45:05.220873 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:45:05.223440 systemd[1]: Starting chronyd.service - NTP client/server... Dec 16 12:45:05.239093 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:45:05.244623 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 12:45:05.251270 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:45:05.259821 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:45:05.268736 chronyd[2087]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Dec 16 12:45:05.270508 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:45:05.276977 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Dec 16 12:45:05.280889 jq[2092]: false Dec 16 12:45:05.281106 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:45:05.284337 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Dec 16 12:45:05.289882 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Dec 16 12:45:05.291304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:45:05.291560 KVP[2097]: KVP starting; pid is:2097 Dec 16 12:45:05.298131 kernel: hv_utils: KVP IC version 4.0 Dec 16 12:45:05.298119 KVP[2097]: KVP LIC Version: 3.1 Dec 16 12:45:05.299348 chronyd[2087]: Timezone right/UTC failed leap second check, ignoring Dec 16 12:45:05.299530 chronyd[2087]: Loaded seccomp filter (level 2) Dec 16 12:45:05.299868 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:45:05.307174 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:45:05.317476 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:45:05.327316 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:45:05.333229 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:45:05.346112 extend-filesystems[2096]: Found /dev/sda6 Dec 16 12:45:05.343249 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:45:05.347583 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:45:05.348503 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:45:05.352409 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:45:05.366471 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:45:05.380112 extend-filesystems[2096]: Found /dev/sda9 Dec 16 12:45:05.386438 jq[2118]: true Dec 16 12:45:05.376592 systemd[1]: Started chronyd.service - NTP client/server. Dec 16 12:45:05.386763 extend-filesystems[2096]: Checking size of /dev/sda9 Dec 16 12:45:05.384619 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:45:05.394008 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:45:05.398432 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:45:05.400759 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:45:05.401240 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:45:05.409770 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:45:05.418752 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:45:05.420191 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Dec 16 12:45:05.426268 extend-filesystems[2096]: Resized partition /dev/sda9 Dec 16 12:45:05.458108 update_engine[2110]: I20251216 12:45:05.456177 2110 main.cc:92] Flatcar Update Engine starting Dec 16 12:45:05.461646 jq[2140]: true Dec 16 12:45:05.463304 extend-filesystems[2150]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:45:05.490300 kernel: EXT4-fs (sda9): resizing filesystem from 6359552 to 6376955 blocks Dec 16 12:45:05.490397 kernel: EXT4-fs (sda9): resized filesystem to 6376955 Dec 16 12:45:05.501051 tar[2137]: linux-arm64/LICENSE Dec 16 12:45:05.501051 tar[2137]: linux-arm64/helm Dec 16 12:45:05.508410 systemd-logind[2108]: New seat seat0. Dec 16 12:45:05.522991 systemd-logind[2108]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 16 12:45:05.523330 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:45:05.540401 extend-filesystems[2150]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 16 12:45:05.540401 extend-filesystems[2150]: old_desc_blocks = 4, new_desc_blocks = 4 Dec 16 12:45:05.540401 extend-filesystems[2150]: The filesystem on /dev/sda9 is now 6376955 (4k) blocks long. Dec 16 12:45:05.587472 extend-filesystems[2096]: Resized filesystem in /dev/sda9 Dec 16 12:45:05.546300 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:45:05.548136 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 12:45:05.604737 bash[2188]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:45:05.607115 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:45:05.614924 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 12:45:05.627605 dbus-daemon[2090]: [system] SELinux support is enabled Dec 16 12:45:05.627855 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:45:05.636567 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:45:05.641818 update_engine[2110]: I20251216 12:45:05.640341 2110 update_check_scheduler.cc:74] Next update check in 4m35s Dec 16 12:45:05.636616 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:45:05.644978 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:45:05.645006 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:45:05.653747 systemd[1]: Started update-engine.service - Update Engine. Dec 16 12:45:05.653934 dbus-daemon[2090]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 12:45:05.667898 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Dec 16 12:45:05.716763 coreos-metadata[2089]: Dec 16 12:45:05.715 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 12:45:05.723141 coreos-metadata[2089]: Dec 16 12:45:05.721 INFO Fetch successful Dec 16 12:45:05.723141 coreos-metadata[2089]: Dec 16 12:45:05.721 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 16 12:45:05.726869 coreos-metadata[2089]: Dec 16 12:45:05.726 INFO Fetch successful Dec 16 12:45:05.727371 coreos-metadata[2089]: Dec 16 12:45:05.727 INFO Fetching http://168.63.129.16/machine/8ea2f935-307a-4d1e-abf9-cf8707fff9bd/d34dbe2c%2D4f79%2D43f0%2Da808%2Db810e5f29b54.%5Fci%2D4515.1.0%2Da%2Da4975b77c5?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 16 12:45:05.730718 coreos-metadata[2089]: Dec 16 12:45:05.730 INFO Fetch successful Dec 16 12:45:05.731446 coreos-metadata[2089]: Dec 16 12:45:05.730 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 16 12:45:05.743242 coreos-metadata[2089]: Dec 16 12:45:05.743 INFO Fetch successful Dec 16 12:45:05.814437 sshd_keygen[2117]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:45:05.847769 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 12:45:05.855164 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:45:05.913680 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:45:05.926967 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:45:05.939835 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 16 12:45:05.977460 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:45:05.979269 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:45:05.988572 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:45:05.997357 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. 
Dec 16 12:45:06.011835 containerd[2141]: time="2025-12-16T12:45:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:45:06.013312 containerd[2141]: time="2025-12-16T12:45:06.012439384Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 12:45:06.025086 containerd[2141]: time="2025-12-16T12:45:06.024951368Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.52µs" Dec 16 12:45:06.025086 containerd[2141]: time="2025-12-16T12:45:06.024994992Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:45:06.025086 containerd[2141]: time="2025-12-16T12:45:06.025037632Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:45:06.025086 containerd[2141]: time="2025-12-16T12:45:06.025046784Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:45:06.030336 containerd[2141]: time="2025-12-16T12:45:06.025221496Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:45:06.030336 containerd[2141]: time="2025-12-16T12:45:06.025239776Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:45:06.030336 containerd[2141]: time="2025-12-16T12:45:06.025297136Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:45:06.030336 containerd[2141]: time="2025-12-16T12:45:06.025304592Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:45:06.030336 containerd[2141]: time="2025-12-16T12:45:06.025489552Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:45:06.030336 containerd[2141]: time="2025-12-16T12:45:06.025501344Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:45:06.030336 containerd[2141]: time="2025-12-16T12:45:06.025508848Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:45:06.030336 containerd[2141]: time="2025-12-16T12:45:06.025513976Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:45:06.030336 containerd[2141]: time="2025-12-16T12:45:06.025631504Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:45:06.030336 containerd[2141]: time="2025-12-16T12:45:06.025639192Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:45:06.030336 containerd[2141]: time="2025-12-16T12:45:06.025688616Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 
16 12:45:06.030336 containerd[2141]: time="2025-12-16T12:45:06.025815536Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:45:06.025299 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:45:06.032539 containerd[2141]: time="2025-12-16T12:45:06.025833472Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:45:06.032539 containerd[2141]: time="2025-12-16T12:45:06.025838912Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:45:06.032539 containerd[2141]: time="2025-12-16T12:45:06.025894976Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:45:06.032539 containerd[2141]: time="2025-12-16T12:45:06.026095480Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:45:06.032539 containerd[2141]: time="2025-12-16T12:45:06.026166464Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:45:06.034488 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:45:06.044578 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 12:45:06.050755 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:45:06.054840 containerd[2141]: time="2025-12-16T12:45:06.054657376Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:45:06.054840 containerd[2141]: time="2025-12-16T12:45:06.054734648Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:45:06.054952 containerd[2141]: time="2025-12-16T12:45:06.054874328Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:45:06.054952 containerd[2141]: time="2025-12-16T12:45:06.054887456Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:45:06.054952 containerd[2141]: time="2025-12-16T12:45:06.054897968Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:45:06.054952 containerd[2141]: time="2025-12-16T12:45:06.054906144Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:45:06.054952 containerd[2141]: time="2025-12-16T12:45:06.054913928Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:45:06.054952 containerd[2141]: time="2025-12-16T12:45:06.054921720Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:45:06.054952 containerd[2141]: time="2025-12-16T12:45:06.054929832Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:45:06.054952 containerd[2141]: time="2025-12-16T12:45:06.054938104Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:45:06.054952 containerd[2141]: time="2025-12-16T12:45:06.054945104Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:45:06.054952 containerd[2141]: time="2025-12-16T12:45:06.054953024Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:45:06.055071 containerd[2141]: time="2025-12-16T12:45:06.054961488Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:45:06.055071 containerd[2141]: time="2025-12-16T12:45:06.054974312Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:45:06.055467 containerd[2141]: time="2025-12-16T12:45:06.055163760Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:45:06.055467 containerd[2141]: time="2025-12-16T12:45:06.055187240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:45:06.055467 containerd[2141]: time="2025-12-16T12:45:06.055197296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:45:06.055467 containerd[2141]: time="2025-12-16T12:45:06.055203776Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:45:06.055467 containerd[2141]: time="2025-12-16T12:45:06.055211400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:45:06.055467 containerd[2141]: time="2025-12-16T12:45:06.055218560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:45:06.055467 containerd[2141]: time="2025-12-16T12:45:06.055226136Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:45:06.055467 containerd[2141]: time="2025-12-16T12:45:06.055233760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:45:06.055467 containerd[2141]: time="2025-12-16T12:45:06.055241104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:45:06.055467 containerd[2141]: time="2025-12-16T12:45:06.055248408Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:45:06.055467 containerd[2141]: time="2025-12-16T12:45:06.055254448Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:45:06.055467 containerd[2141]: time="2025-12-16T12:45:06.055318952Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:45:06.055467 containerd[2141]: time="2025-12-16T12:45:06.055362984Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:45:06.055467 containerd[2141]: time="2025-12-16T12:45:06.055383048Z" level=info msg="Start snapshots syncer" Dec 16 12:45:06.055467 containerd[2141]: time="2025-12-16T12:45:06.055401552Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:45:06.058114 containerd[2141]: time="2025-12-16T12:45:06.055635952Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:45:06.058114 containerd[2141]: time="2025-12-16T12:45:06.055672368Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:45:06.058203 containerd[2141]: time="2025-12-16T12:45:06.055708640Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:45:06.058203 containerd[2141]: time="2025-12-16T12:45:06.055809496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:45:06.058203 containerd[2141]: time="2025-12-16T12:45:06.055824832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:45:06.058203 containerd[2141]: time="2025-12-16T12:45:06.055831456Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:45:06.058203 containerd[2141]: time="2025-12-16T12:45:06.055839416Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:45:06.058203 containerd[2141]: time="2025-12-16T12:45:06.055847432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:45:06.058203 containerd[2141]: time="2025-12-16T12:45:06.055854240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:45:06.058203 containerd[2141]: time="2025-12-16T12:45:06.055860656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:45:06.058203 containerd[2141]: time="2025-12-16T12:45:06.055867040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 
12:45:06.058203 containerd[2141]: time="2025-12-16T12:45:06.055874856Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:45:06.058203 containerd[2141]: time="2025-12-16T12:45:06.055895760Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:45:06.058203 containerd[2141]: time="2025-12-16T12:45:06.055905512Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:45:06.058203 containerd[2141]: time="2025-12-16T12:45:06.055910304Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:45:06.058350 containerd[2141]: time="2025-12-16T12:45:06.055915664Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:45:06.058350 containerd[2141]: time="2025-12-16T12:45:06.055921224Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:45:06.058350 containerd[2141]: time="2025-12-16T12:45:06.055926984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:45:06.058350 containerd[2141]: time="2025-12-16T12:45:06.055933552Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:45:06.058350 containerd[2141]: time="2025-12-16T12:45:06.055942008Z" level=info msg="runtime interface created" Dec 16 12:45:06.058350 containerd[2141]: time="2025-12-16T12:45:06.055945288Z" level=info msg="created NRI interface" Dec 16 12:45:06.058350 containerd[2141]: time="2025-12-16T12:45:06.055949976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:45:06.058350 containerd[2141]: time="2025-12-16T12:45:06.055957424Z" level=info msg="Connect containerd service" Dec 16 12:45:06.058350 containerd[2141]: time="2025-12-16T12:45:06.055979232Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:45:06.058350 containerd[2141]: time="2025-12-16T12:45:06.057076408Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:45:06.081355 locksmithd[2198]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:45:06.145915 tar[2137]: linux-arm64/README.md Dec 16 12:45:06.163907 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:45:06.379115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:06.484880 (kubelet)[2304]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:45:06.587007 containerd[2141]: time="2025-12-16T12:45:06.586960936Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:45:06.587163 containerd[2141]: time="2025-12-16T12:45:06.587027920Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 16 12:45:06.587163 containerd[2141]: time="2025-12-16T12:45:06.587044320Z" level=info msg="Start subscribing containerd event" Dec 16 12:45:06.587163 containerd[2141]: time="2025-12-16T12:45:06.587104936Z" level=info msg="Start recovering state" Dec 16 12:45:06.587211 containerd[2141]: time="2025-12-16T12:45:06.587181896Z" level=info msg="Start event monitor" Dec 16 12:45:06.587211 containerd[2141]: time="2025-12-16T12:45:06.587191536Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:45:06.587211 containerd[2141]: time="2025-12-16T12:45:06.587197400Z" level=info msg="Start streaming server" Dec 16 12:45:06.587211 containerd[2141]: time="2025-12-16T12:45:06.587203240Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:45:06.587211 containerd[2141]: time="2025-12-16T12:45:06.587208224Z" level=info msg="runtime interface starting up..." Dec 16 12:45:06.587278 containerd[2141]: time="2025-12-16T12:45:06.587212760Z" level=info msg="starting plugins..." Dec 16 12:45:06.587278 containerd[2141]: time="2025-12-16T12:45:06.587223928Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:45:06.588201 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:45:06.590861 containerd[2141]: time="2025-12-16T12:45:06.590818184Z" level=info msg="containerd successfully booted in 0.579732s" Dec 16 12:45:06.596342 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:45:06.603007 systemd[1]: Startup finished in 2.751s (kernel) + 11.549s (initrd) + 23.262s (userspace) = 37.563s. Dec 16 12:45:06.853446 kubelet[2304]: E1216 12:45:06.853387 2304 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:45:06.855650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:45:06.855768 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:45:06.857199 systemd[1]: kubelet.service: Consumed 568ms CPU time, 256.6M memory peak. Dec 16 12:45:06.923756 login[2284]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:06.929686 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:45:06.930740 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:45:06.936120 systemd-logind[2108]: New session 1 of user core. Dec 16 12:45:06.951113 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:45:06.955867 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:45:06.970764 (systemd)[2322]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 12:45:06.987547 systemd-logind[2108]: New session c1 of user core. Dec 16 12:45:06.999003 login[2285]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:07.004047 systemd-logind[2108]: New session 2 of user core. Dec 16 12:45:07.126785 systemd[2322]: Queued start job for default target default.target. Dec 16 12:45:07.138596 systemd[2322]: Created slice app.slice - User Application Slice. 
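The "starting cri plugin" entry above serializes its effective configuration as a single backslash-escaped JSON blob, which is hard to read inline. The following sketch (Python, illustrative only; the field names and values are copied from that blob, trimmed to a small excerpt rather than the full config) loads it and surfaces the settings that matter later in this boot: runc runs with systemd cgroups, and CNI configuration is expected under /etc/cni/net.d, which is still empty at this point and is what triggers the "failed to load cni during init" error logged above.

import json

# Trimmed excerpt of the config="..." payload from the "starting cri plugin" entry,
# with the log's backslash escaping removed. Only a subset of fields is kept here.
cri_config_json = """
{
  "containerd": {
    "defaultRuntimeName": "runc",
    "runtimes": {
      "runc": {
        "runtimeType": "io.containerd.runc.v2",
        "options": {"SystemdCgroup": true},
        "sandboxer": "podsandbox"
      }
    }
  },
  "cni": {"binDirs": ["/opt/cni/bin"], "confDir": "/etc/cni/net.d", "maxConfNum": 1},
  "enableSelinux": true,
  "enableCDI": true,
  "cdiSpecDirs": ["/etc/cdi", "/var/run/cdi"]
}
"""

cfg = json.loads(cri_config_json)
runc = cfg["containerd"]["runtimes"]["runc"]
print("default runtime :", cfg["containerd"]["defaultRuntimeName"])  # runc
print("systemd cgroups :", runc["options"]["SystemdCgroup"])         # True on this host
print("CNI conf dir    :", cfg["cni"]["confDir"])  # no conflist present yet, hence the CNI load error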
Dec 16 12:45:07.138782 systemd[2322]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 12:45:07.138841 systemd[2322]: Reached target paths.target - Paths. Dec 16 12:45:07.139043 systemd[2322]: Reached target timers.target - Timers. Dec 16 12:45:07.140382 systemd[2322]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:45:07.141470 systemd[2322]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 12:45:07.152184 systemd[2322]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:45:07.152443 systemd[2322]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 12:45:07.152544 systemd[2322]: Reached target sockets.target - Sockets. Dec 16 12:45:07.152592 systemd[2322]: Reached target basic.target - Basic System. Dec 16 12:45:07.152614 systemd[2322]: Reached target default.target - Main User Target. Dec 16 12:45:07.152637 systemd[2322]: Startup finished in 157ms. Dec 16 12:45:07.153139 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:45:07.162518 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:45:07.163495 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 12:45:07.858775 waagent[2278]: 2025-12-16T12:45:07.858698Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 16 12:45:07.862927 waagent[2278]: 2025-12-16T12:45:07.862870Z INFO Daemon Daemon OS: flatcar 4515.1.0 Dec 16 12:45:07.866410 waagent[2278]: 2025-12-16T12:45:07.866367Z INFO Daemon Daemon Python: 3.11.13 Dec 16 12:45:07.869679 waagent[2278]: 2025-12-16T12:45:07.869633Z INFO Daemon Daemon Run daemon Dec 16 12:45:07.874087 waagent[2278]: 2025-12-16T12:45:07.872476Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4515.1.0' Dec 16 12:45:07.878850 waagent[2278]: 2025-12-16T12:45:07.878797Z INFO Daemon Daemon Using waagent for provisioning Dec 16 12:45:07.883205 waagent[2278]: 2025-12-16T12:45:07.883163Z INFO Daemon Daemon Activate resource disk Dec 16 12:45:07.886585 waagent[2278]: 2025-12-16T12:45:07.886548Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 16 12:45:07.894623 waagent[2278]: 2025-12-16T12:45:07.894571Z INFO Daemon Daemon Found device: None Dec 16 12:45:07.897895 waagent[2278]: 2025-12-16T12:45:07.897855Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 16 12:45:07.903754 waagent[2278]: 2025-12-16T12:45:07.903718Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 16 12:45:07.912096 waagent[2278]: 2025-12-16T12:45:07.912050Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 12:45:07.916825 waagent[2278]: 2025-12-16T12:45:07.916789Z INFO Daemon Daemon Running default provisioning handler Dec 16 12:45:07.926539 waagent[2278]: 2025-12-16T12:45:07.926068Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Dec 16 12:45:07.936033 waagent[2278]: 2025-12-16T12:45:07.935977Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 16 12:45:07.942904 waagent[2278]: 2025-12-16T12:45:07.942855Z INFO Daemon Daemon cloud-init is enabled: False Dec 16 12:45:07.947811 waagent[2278]: 2025-12-16T12:45:07.947772Z INFO Daemon Daemon Copying ovf-env.xml Dec 16 12:45:07.995970 waagent[2278]: 2025-12-16T12:45:07.995878Z INFO Daemon Daemon Successfully mounted dvd Dec 16 12:45:08.024235 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 16 12:45:08.026163 waagent[2278]: 2025-12-16T12:45:08.026006Z INFO Daemon Daemon Detect protocol endpoint Dec 16 12:45:08.029667 waagent[2278]: 2025-12-16T12:45:08.029612Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 12:45:08.033790 waagent[2278]: 2025-12-16T12:45:08.033746Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Dec 16 12:45:08.038461 waagent[2278]: 2025-12-16T12:45:08.038421Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 16 12:45:08.042447 waagent[2278]: 2025-12-16T12:45:08.042407Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 16 12:45:08.045878 waagent[2278]: 2025-12-16T12:45:08.045843Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 16 12:45:08.073271 waagent[2278]: 2025-12-16T12:45:08.073228Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 16 12:45:08.077893 waagent[2278]: 2025-12-16T12:45:08.077867Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 16 12:45:08.081619 waagent[2278]: 2025-12-16T12:45:08.081588Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 16 12:45:08.196993 waagent[2278]: 2025-12-16T12:45:08.196763Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 16 12:45:08.201860 waagent[2278]: 2025-12-16T12:45:08.201804Z INFO Daemon Daemon Forcing an update of the goal state. Dec 16 12:45:08.209341 waagent[2278]: 2025-12-16T12:45:08.209298Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 12:45:08.227785 waagent[2278]: 2025-12-16T12:45:08.227748Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Dec 16 12:45:08.232172 waagent[2278]: 2025-12-16T12:45:08.232133Z INFO Daemon Dec 16 12:45:08.234146 waagent[2278]: 2025-12-16T12:45:08.234113Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: a3bb8f9d-d7c3-4782-b507-afc6437db055 eTag: 17016985074182774708 source: Fabric] Dec 16 12:45:08.242077 waagent[2278]: 2025-12-16T12:45:08.242041Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 16 12:45:08.246826 waagent[2278]: 2025-12-16T12:45:08.246795Z INFO Daemon Dec 16 12:45:08.248798 waagent[2278]: 2025-12-16T12:45:08.248769Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 16 12:45:08.259403 waagent[2278]: 2025-12-16T12:45:08.259372Z INFO Daemon Daemon Downloading artifacts profile blob Dec 16 12:45:08.323590 waagent[2278]: 2025-12-16T12:45:08.323512Z INFO Daemon Downloaded certificate {'thumbprint': '3D6838DFCA9B01FCB302448FF99931878C1509D4', 'hasPrivateKey': True} Dec 16 12:45:08.331051 waagent[2278]: 2025-12-16T12:45:08.331010Z INFO Daemon Fetch goal state completed Dec 16 12:45:08.341340 waagent[2278]: 2025-12-16T12:45:08.341301Z INFO Daemon Daemon Starting provisioning Dec 16 12:45:08.344944 waagent[2278]: 2025-12-16T12:45:08.344901Z INFO Daemon Daemon Handle ovf-env.xml. 
Dec 16 12:45:08.348228 waagent[2278]: 2025-12-16T12:45:08.348198Z INFO Daemon Daemon Set hostname [ci-4515.1.0-a-a4975b77c5] Dec 16 12:45:08.375124 waagent[2278]: 2025-12-16T12:45:08.375034Z INFO Daemon Daemon Publish hostname [ci-4515.1.0-a-a4975b77c5] Dec 16 12:45:08.379768 waagent[2278]: 2025-12-16T12:45:08.379710Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 16 12:45:08.384285 waagent[2278]: 2025-12-16T12:45:08.384242Z INFO Daemon Daemon Primary interface is [eth0] Dec 16 12:45:08.394755 systemd-networkd[1724]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:45:08.394763 systemd-networkd[1724]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:45:08.394853 systemd-networkd[1724]: eth0: DHCP lease lost Dec 16 12:45:08.419104 waagent[2278]: 2025-12-16T12:45:08.416497Z INFO Daemon Daemon Create user account if not exists Dec 16 12:45:08.420790 waagent[2278]: 2025-12-16T12:45:08.420734Z INFO Daemon Daemon User core already exists, skip useradd Dec 16 12:45:08.424717 waagent[2278]: 2025-12-16T12:45:08.424665Z INFO Daemon Daemon Configure sudoer Dec 16 12:45:08.431705 waagent[2278]: 2025-12-16T12:45:08.431638Z INFO Daemon Daemon Configure sshd Dec 16 12:45:08.432152 systemd-networkd[1724]: eth0: DHCPv4 address 10.200.20.49/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:45:08.439181 waagent[2278]: 2025-12-16T12:45:08.439113Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 16 12:45:08.447788 waagent[2278]: 2025-12-16T12:45:08.447689Z INFO Daemon Daemon Deploy ssh public key. Dec 16 12:45:09.529201 waagent[2278]: 2025-12-16T12:45:09.529154Z INFO Daemon Daemon Provisioning complete Dec 16 12:45:09.543495 waagent[2278]: 2025-12-16T12:45:09.543452Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 16 12:45:09.548287 waagent[2278]: 2025-12-16T12:45:09.548244Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Dec 16 12:45:09.555150 waagent[2278]: 2025-12-16T12:45:09.555112Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 16 12:45:09.660125 waagent[2375]: 2025-12-16T12:45:09.659839Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 16 12:45:09.660125 waagent[2375]: 2025-12-16T12:45:09.659990Z INFO ExtHandler ExtHandler OS: flatcar 4515.1.0 Dec 16 12:45:09.660125 waagent[2375]: 2025-12-16T12:45:09.660031Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 16 12:45:09.660125 waagent[2375]: 2025-12-16T12:45:09.660068Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Dec 16 12:45:09.701120 waagent[2375]: 2025-12-16T12:45:09.700586Z INFO ExtHandler ExtHandler Distro: flatcar-4515.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 16 12:45:09.701120 waagent[2375]: 2025-12-16T12:45:09.700808Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:45:09.701120 waagent[2375]: 2025-12-16T12:45:09.700856Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:45:09.707330 waagent[2375]: 2025-12-16T12:45:09.707271Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 12:45:09.712542 waagent[2375]: 2025-12-16T12:45:09.712507Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Dec 16 12:45:09.712988 waagent[2375]: 2025-12-16T12:45:09.712952Z INFO ExtHandler Dec 16 12:45:09.713041 waagent[2375]: 2025-12-16T12:45:09.713023Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 72f8b8da-9064-4d63-80a8-c111cab6ad27 eTag: 17016985074182774708 source: Fabric] Dec 16 12:45:09.713306 waagent[2375]: 2025-12-16T12:45:09.713278Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 16 12:45:09.713724 waagent[2375]: 2025-12-16T12:45:09.713693Z INFO ExtHandler Dec 16 12:45:09.713764 waagent[2375]: 2025-12-16T12:45:09.713748Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 16 12:45:09.720207 waagent[2375]: 2025-12-16T12:45:09.720177Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 12:45:09.777312 waagent[2375]: 2025-12-16T12:45:09.777237Z INFO ExtHandler Downloaded certificate {'thumbprint': '3D6838DFCA9B01FCB302448FF99931878C1509D4', 'hasPrivateKey': True} Dec 16 12:45:09.777727 waagent[2375]: 2025-12-16T12:45:09.777693Z INFO ExtHandler Fetch goal state completed Dec 16 12:45:09.796169 waagent[2375]: 2025-12-16T12:45:09.796027Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.3 30 Sep 2025 (Library: OpenSSL 3.4.3 30 Sep 2025) Dec 16 12:45:09.799845 waagent[2375]: 2025-12-16T12:45:09.799778Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2375 Dec 16 12:45:09.799976 waagent[2375]: 2025-12-16T12:45:09.799939Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 16 12:45:09.800291 waagent[2375]: 2025-12-16T12:45:09.800260Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 16 12:45:09.801408 waagent[2375]: 2025-12-16T12:45:09.801370Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] Dec 16 12:45:09.801743 waagent[2375]: 2025-12-16T12:45:09.801712Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 16 12:45:09.801869 waagent[2375]: 2025-12-16T12:45:09.801846Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 16 12:45:09.802335 waagent[2375]: 2025-12-16T12:45:09.802302Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 16 12:45:09.876112 waagent[2375]: 2025-12-16T12:45:09.875966Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 16 12:45:09.876246 waagent[2375]: 2025-12-16T12:45:09.876212Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 16 12:45:09.881140 waagent[2375]: 2025-12-16T12:45:09.880975Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 16 12:45:09.886273 systemd[1]: Reload requested from client PID 2390 ('systemctl') (unit waagent.service)... Dec 16 12:45:09.886523 systemd[1]: Reloading... Dec 16 12:45:09.944188 zram_generator::config[2429]: No configuration found. Dec 16 12:45:10.124232 systemd[1]: Reloading finished in 237 ms. Dec 16 12:45:10.153181 waagent[2375]: 2025-12-16T12:45:10.151317Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 16 12:45:10.153181 waagent[2375]: 2025-12-16T12:45:10.151476Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 16 12:45:10.388375 waagent[2375]: 2025-12-16T12:45:10.388254Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 16 12:45:10.388602 waagent[2375]: 2025-12-16T12:45:10.388568Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 16 12:45:10.389238 waagent[2375]: 2025-12-16T12:45:10.389195Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 16 12:45:10.389498 waagent[2375]: 2025-12-16T12:45:10.389461Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 16 12:45:10.390303 waagent[2375]: 2025-12-16T12:45:10.389689Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:45:10.390303 waagent[2375]: 2025-12-16T12:45:10.389757Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:45:10.390303 waagent[2375]: 2025-12-16T12:45:10.389912Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 16 12:45:10.390303 waagent[2375]: 2025-12-16T12:45:10.390049Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 16 12:45:10.390303 waagent[2375]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 16 12:45:10.390303 waagent[2375]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 16 12:45:10.390303 waagent[2375]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 16 12:45:10.390303 waagent[2375]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:45:10.390303 waagent[2375]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:45:10.390303 waagent[2375]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:45:10.390590 waagent[2375]: 2025-12-16T12:45:10.390552Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 16 12:45:10.390705 waagent[2375]: 2025-12-16T12:45:10.390669Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 16 12:45:10.390889 waagent[2375]: 2025-12-16T12:45:10.390856Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:45:10.390932 waagent[2375]: 2025-12-16T12:45:10.390913Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:45:10.391042 waagent[2375]: 2025-12-16T12:45:10.391017Z INFO EnvHandler ExtHandler Configure routes Dec 16 12:45:10.391092 waagent[2375]: 2025-12-16T12:45:10.391066Z INFO EnvHandler ExtHandler Gateway:None Dec 16 12:45:10.391150 waagent[2375]: 2025-12-16T12:45:10.391133Z INFO EnvHandler ExtHandler Routes:None Dec 16 12:45:10.391633 waagent[2375]: 2025-12-16T12:45:10.391608Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 16 12:45:10.392000 waagent[2375]: 2025-12-16T12:45:10.391580Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 16 12:45:10.392076 waagent[2375]: 2025-12-16T12:45:10.392052Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 16 12:45:10.399820 waagent[2375]: 2025-12-16T12:45:10.398446Z INFO ExtHandler ExtHandler Dec 16 12:45:10.399820 waagent[2375]: 2025-12-16T12:45:10.398521Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: adb20f58-a468-40cd-ab36-c7b597a665d9 correlation 9d54db00-4dda-44be-af5b-508104704c17 created: 2025-12-16T12:44:06.584343Z] Dec 16 12:45:10.399820 waagent[2375]: 2025-12-16T12:45:10.398796Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
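The routing table that MonitorHandler dumps above comes straight from /proc/net/route, where destination, gateway, and mask fields are little-endian hexadecimal. A minimal decoding sketch (Python; the literal values are copied from that dump) shows the entries line up with the rest of this log: the DHCP gateway 10.200.20.1, the on-link 10.200.20.0/24 subnet, the Azure wire server 168.63.129.16, and the instance metadata endpoint 169.254.169.254.

import socket
import struct

def decode_route_addr(hex_addr: str) -> str:
    """Fields in /proc/net/route are little-endian hex; convert to dotted-quad form."""
    return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

# Values copied from the routing table dump above.
assert decode_route_addr("0114C80A") == "10.200.20.1"      # default gateway
assert decode_route_addr("0014C80A") == "10.200.20.0"      # on-link /24 (mask 00FFFFFF)
assert decode_route_addr("10813FA8") == "168.63.129.16"    # Azure wire server host route
assert decode_route_addr("FEA9FEA9") == "169.254.169.254"  # instance metadata host route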
Dec 16 12:45:10.399820 waagent[2375]: 2025-12-16T12:45:10.399215Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Dec 16 12:45:10.425740 waagent[2375]: 2025-12-16T12:45:10.425688Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 16 12:45:10.425740 waagent[2375]: Try `iptables -h' or 'iptables --help' for more information.) Dec 16 12:45:10.426456 waagent[2375]: 2025-12-16T12:45:10.426414Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F2BF1308-6784-49BD-8439-4869360B3E06;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 16 12:45:10.493823 waagent[2375]: 2025-12-16T12:45:10.493758Z INFO MonitorHandler ExtHandler Network interfaces: Dec 16 12:45:10.493823 waagent[2375]: Executing ['ip', '-a', '-o', 'link']: Dec 16 12:45:10.493823 waagent[2375]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 16 12:45:10.493823 waagent[2375]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c6:13:bc brd ff:ff:ff:ff:ff:ff\ altname enx000d3ac613bc Dec 16 12:45:10.493823 waagent[2375]: 3: enP38450s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c6:13:bc brd ff:ff:ff:ff:ff:ff\ altname enP38450p0s2 Dec 16 12:45:10.493823 waagent[2375]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 16 12:45:10.493823 waagent[2375]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 16 12:45:10.493823 waagent[2375]: 2: eth0 inet 10.200.20.49/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 16 12:45:10.493823 waagent[2375]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 16 12:45:10.493823 waagent[2375]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 16 12:45:10.493823 waagent[2375]: 2: eth0 inet6 fe80::20d:3aff:fec6:13bc/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 16 12:45:10.792771 waagent[2375]: 2025-12-16T12:45:10.792700Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 16 12:45:10.792771 waagent[2375]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:45:10.792771 waagent[2375]: pkts bytes target prot opt in out source destination Dec 16 12:45:10.792771 waagent[2375]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:45:10.792771 waagent[2375]: pkts bytes target prot opt in out source destination Dec 16 12:45:10.792771 waagent[2375]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:45:10.792771 waagent[2375]: pkts bytes target prot opt in out source destination Dec 16 12:45:10.792771 waagent[2375]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 12:45:10.792771 waagent[2375]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 12:45:10.792771 waagent[2375]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 12:45:10.795238 waagent[2375]: 2025-12-16T12:45:10.795185Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 16 12:45:10.795238 waagent[2375]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:45:10.795238 waagent[2375]: pkts bytes target prot opt in 
out source destination Dec 16 12:45:10.795238 waagent[2375]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:45:10.795238 waagent[2375]: pkts bytes target prot opt in out source destination Dec 16 12:45:10.795238 waagent[2375]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:45:10.795238 waagent[2375]: pkts bytes target prot opt in out source destination Dec 16 12:45:10.795238 waagent[2375]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 12:45:10.795238 waagent[2375]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 12:45:10.795238 waagent[2375]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 12:45:10.795454 waagent[2375]: 2025-12-16T12:45:10.795427Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 16 12:45:17.014813 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:45:17.016597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:45:17.129898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:17.140465 (kubelet)[2529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:45:17.227412 kubelet[2529]: E1216 12:45:17.227336 2529 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:45:17.230394 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:45:17.230637 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:45:17.231281 systemd[1]: kubelet.service: Consumed 176ms CPU time, 105.7M memory peak. Dec 16 12:45:22.994287 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:45:22.996290 systemd[1]: Started sshd@0-10.200.20.49:22-10.200.16.10:35346.service - OpenSSH per-connection server daemon (10.200.16.10:35346). Dec 16 12:45:23.579692 sshd[2537]: Accepted publickey for core from 10.200.16.10 port 35346 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:45:23.580790 sshd-session[2537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:23.585288 systemd-logind[2108]: New session 3 of user core. Dec 16 12:45:23.591272 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 12:45:23.873113 systemd[1]: Started sshd@1-10.200.20.49:22-10.200.16.10:35356.service - OpenSSH per-connection server daemon (10.200.16.10:35356). Dec 16 12:45:24.268385 sshd[2543]: Accepted publickey for core from 10.200.16.10 port 35356 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:45:24.269464 sshd-session[2543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:24.273378 systemd-logind[2108]: New session 4 of user core. Dec 16 12:45:24.284316 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:45:24.482655 sshd[2546]: Connection closed by 10.200.16.10 port 35356 Dec 16 12:45:24.482563 sshd-session[2543]: pam_unix(sshd:session): session closed for user core Dec 16 12:45:24.486185 systemd-logind[2108]: Session 4 logged out. Waiting for processes to exit. 
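The kubelet failure above is the same one recorded earlier in this log: the unit is installed and keeps being restarted, but /var/lib/kubelet/config.yaml does not exist yet, which is the normal state of a kubeadm-managed node before kubeadm init or kubeadm join has run, since kubeadm writes that file during bootstrap. A trivial sketch (Python, illustrative only; the path is the one reported in the error) of the check that keeps failing:

from pathlib import Path

# Path reported by the failing kubelet unit above; kubeadm init/join creates it,
# so before cluster bootstrap the unit is expected to exit and be restarted.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_bootstrapped(path: Path = KUBELET_CONFIG) -> bool:
    """True once kubeadm has written the kubelet configuration file."""
    return path.is_file()

if __name__ == "__main__":
    state = "present" if kubelet_bootstrapped() else "missing (kubeadm has not run yet)"
    print(f"{KUBELET_CONFIG}: {state}")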
Dec 16 12:45:24.486464 systemd[1]: sshd@1-10.200.20.49:22-10.200.16.10:35356.service: Deactivated successfully. Dec 16 12:45:24.487884 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 12:45:24.491685 systemd-logind[2108]: Removed session 4. Dec 16 12:45:24.575748 systemd[1]: Started sshd@2-10.200.20.49:22-10.200.16.10:35360.service - OpenSSH per-connection server daemon (10.200.16.10:35360). Dec 16 12:45:25.005192 sshd[2552]: Accepted publickey for core from 10.200.16.10 port 35360 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:45:25.006220 sshd-session[2552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:25.010326 systemd-logind[2108]: New session 5 of user core. Dec 16 12:45:25.017337 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 12:45:25.235798 sshd[2555]: Connection closed by 10.200.16.10 port 35360 Dec 16 12:45:25.236402 sshd-session[2552]: pam_unix(sshd:session): session closed for user core Dec 16 12:45:25.240060 systemd[1]: sshd@2-10.200.20.49:22-10.200.16.10:35360.service: Deactivated successfully. Dec 16 12:45:25.241787 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:45:25.242538 systemd-logind[2108]: Session 5 logged out. Waiting for processes to exit. Dec 16 12:45:25.244340 systemd-logind[2108]: Removed session 5. Dec 16 12:45:25.318319 systemd[1]: Started sshd@3-10.200.20.49:22-10.200.16.10:35364.service - OpenSSH per-connection server daemon (10.200.16.10:35364). Dec 16 12:45:25.709457 sshd[2561]: Accepted publickey for core from 10.200.16.10 port 35364 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:45:25.710507 sshd-session[2561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:25.714907 systemd-logind[2108]: New session 6 of user core. Dec 16 12:45:25.726284 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 12:45:25.924613 sshd[2564]: Connection closed by 10.200.16.10 port 35364 Dec 16 12:45:25.925212 sshd-session[2561]: pam_unix(sshd:session): session closed for user core Dec 16 12:45:25.929384 systemd-logind[2108]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:45:25.929958 systemd[1]: sshd@3-10.200.20.49:22-10.200.16.10:35364.service: Deactivated successfully. Dec 16 12:45:25.931711 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 12:45:25.934300 systemd-logind[2108]: Removed session 6. Dec 16 12:45:26.009227 systemd[1]: Started sshd@4-10.200.20.49:22-10.200.16.10:35380.service - OpenSSH per-connection server daemon (10.200.16.10:35380). Dec 16 12:45:26.400039 sshd[2570]: Accepted publickey for core from 10.200.16.10 port 35380 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:45:26.401222 sshd-session[2570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:26.405239 systemd-logind[2108]: New session 7 of user core. Dec 16 12:45:26.411259 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 16 12:45:26.651334 sudo[2574]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:45:26.651554 sudo[2574]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:45:26.677531 sudo[2574]: pam_unix(sudo:session): session closed for user root Dec 16 12:45:26.750806 sshd[2573]: Connection closed by 10.200.16.10 port 35380 Dec 16 12:45:26.749825 sshd-session[2570]: pam_unix(sshd:session): session closed for user core Dec 16 12:45:26.753210 systemd-logind[2108]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:45:26.753958 systemd[1]: sshd@4-10.200.20.49:22-10.200.16.10:35380.service: Deactivated successfully. Dec 16 12:45:26.755656 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:45:26.757789 systemd-logind[2108]: Removed session 7. Dec 16 12:45:26.850659 systemd[1]: Started sshd@5-10.200.20.49:22-10.200.16.10:35396.service - OpenSSH per-connection server daemon (10.200.16.10:35396). Dec 16 12:45:27.264861 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 12:45:27.266269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:45:27.281580 sshd[2580]: Accepted publickey for core from 10.200.16.10 port 35396 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:45:27.283051 sshd-session[2580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:27.287022 systemd-logind[2108]: New session 8 of user core. Dec 16 12:45:27.288788 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 12:45:27.376600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:27.379843 (kubelet)[2592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:45:27.439876 sudo[2599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:45:27.440119 sudo[2599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:45:27.479925 kubelet[2592]: E1216 12:45:27.479868 2592 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:45:27.482114 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:45:27.482230 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:45:27.484180 systemd[1]: kubelet.service: Consumed 116ms CPU time, 105.1M memory peak. Dec 16 12:45:27.809507 sudo[2599]: pam_unix(sudo:session): session closed for user root Dec 16 12:45:27.814724 sudo[2598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:45:27.814937 sudo[2598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:45:27.824710 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Dec 16 12:45:27.857000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:45:27.861670 kernel: kauditd_printk_skb: 98 callbacks suppressed Dec 16 12:45:27.861726 kernel: audit: type=1305 audit(1765889127.857:259): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:45:27.861964 augenrules[2622]: No rules Dec 16 12:45:27.857000 audit[2622]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd75306d0 a2=420 a3=0 items=0 ppid=2603 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:27.870914 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:45:27.871302 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:45:27.885442 kernel: audit: type=1300 audit(1765889127.857:259): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd75306d0 a2=420 a3=0 items=0 ppid=2603 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:27.885717 sudo[2598]: pam_unix(sudo:session): session closed for user root Dec 16 12:45:27.857000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:45:27.894557 kernel: audit: type=1327 audit(1765889127.857:259): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:45:27.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:27.906685 kernel: audit: type=1130 audit(1765889127.868:260): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:27.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:27.918405 kernel: audit: type=1131 audit(1765889127.868:261): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:27.918482 kernel: audit: type=1106 audit(1765889127.884:262): pid=2598 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:27.884000 audit[2598]: USER_END pid=2598 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:27.884000 audit[2598]: CRED_DISP pid=2598 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 12:45:27.943458 kernel: audit: type=1104 audit(1765889127.884:263): pid=2598 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:27.970458 sshd[2586]: Connection closed by 10.200.16.10 port 35396 Dec 16 12:45:27.970988 sshd-session[2580]: pam_unix(sshd:session): session closed for user core Dec 16 12:45:27.971000 audit[2580]: USER_END pid=2580 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:27.990820 systemd[1]: sshd@5-10.200.20.49:22-10.200.16.10:35396.service: Deactivated successfully. Dec 16 12:45:27.971000 audit[2580]: CRED_DISP pid=2580 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:27.993454 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:45:27.994667 systemd-logind[2108]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:45:27.997464 systemd-logind[2108]: Removed session 8. Dec 16 12:45:28.005733 kernel: audit: type=1106 audit(1765889127.971:264): pid=2580 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:28.005824 kernel: audit: type=1104 audit(1765889127.971:265): pid=2580 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:28.005857 kernel: audit: type=1131 audit(1765889127.989:266): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.49:22-10.200.16.10:35396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:27.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.49:22-10.200.16.10:35396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:28.093909 systemd[1]: Started sshd@6-10.200.20.49:22-10.200.16.10:35398.service - OpenSSH per-connection server daemon (10.200.16.10:35398). Dec 16 12:45:28.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.49:22-10.200.16.10:35398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:45:28.520000 audit[2631]: USER_ACCT pid=2631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:28.522844 sshd[2631]: Accepted publickey for core from 10.200.16.10 port 35398 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:45:28.522000 audit[2631]: CRED_ACQ pid=2631 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:28.522000 audit[2631]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdcb2be30 a2=3 a3=0 items=0 ppid=1 pid=2631 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:28.522000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:45:28.523589 sshd-session[2631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:45:28.527656 systemd-logind[2108]: New session 9 of user core. Dec 16 12:45:28.542471 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:45:28.543000 audit[2631]: USER_START pid=2631 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:28.545000 audit[2634]: CRED_ACQ pid=2634 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:45:28.678000 audit[2635]: USER_ACCT pid=2635 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:28.679936 sudo[2635]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:45:28.678000 audit[2635]: CRED_REFR pid=2635 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:28.680196 sudo[2635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:45:28.680000 audit[2635]: USER_START pid=2635 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:45:29.091585 chronyd[2087]: Selected source PHC0 Dec 16 12:45:30.017470 systemd[1]: Starting docker.service - Docker Application Container Engine... 
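The audit PROCTITLE records in this log, such as the auditctl record earlier and the Docker iptables records that follow, carry the command line as hex-encoded bytes with NUL separators between arguments. A small decoding sketch (Python; the sample values are copied verbatim from records in this log):

def decode_proctitle(hex_value: str) -> str:
    """Audit PROCTITLE stores argv as hex bytes, NUL-separated between arguments."""
    return " ".join(
        arg.decode("utf-8", errors="replace")
        for arg in bytes.fromhex(hex_value).split(b"\x00")
    )

# PROCTITLE values copied from this log.
assert decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
) == "/sbin/auditctl -R /etc/audit/audit.rules"
assert decode_proctitle(
    "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
) == "/usr/bin/iptables --wait -t nat -N DOCKER"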
Dec 16 12:45:30.027402 (dockerd)[2654]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:45:31.343130 dockerd[2654]: time="2025-12-16T12:45:31.342362709Z" level=info msg="Starting up" Dec 16 12:45:31.343695 dockerd[2654]: time="2025-12-16T12:45:31.343669693Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:45:31.353214 dockerd[2654]: time="2025-12-16T12:45:31.353163493Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:45:31.436352 systemd[1]: var-lib-docker-metacopy\x2dcheck874397857-merged.mount: Deactivated successfully. Dec 16 12:45:31.450140 dockerd[2654]: time="2025-12-16T12:45:31.449908637Z" level=info msg="Loading containers: start." Dec 16 12:45:31.476135 kernel: Initializing XFRM netlink socket Dec 16 12:45:31.512000 audit[2700]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2700 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.512000 audit[2700]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffdb2c4ee0 a2=0 a3=0 items=0 ppid=2654 pid=2700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.512000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:45:31.514000 audit[2702]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2702 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.514000 audit[2702]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff3ecc860 a2=0 a3=0 items=0 ppid=2654 pid=2702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.514000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:45:31.516000 audit[2704]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2704 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.516000 audit[2704]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdecdcc40 a2=0 a3=0 items=0 ppid=2654 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.516000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:45:31.517000 audit[2706]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2706 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.517000 audit[2706]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffde8d04a0 a2=0 a3=0 items=0 ppid=2654 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.517000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:45:31.519000 audit[2708]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=2708 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.519000 audit[2708]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffd9582c0 a2=0 a3=0 items=0 ppid=2654 pid=2708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.519000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:45:31.521000 audit[2710]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=2710 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.521000 audit[2710]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe3aa4e00 a2=0 a3=0 items=0 ppid=2654 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.521000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:45:31.522000 audit[2712]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2712 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.522000 audit[2712]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc5656a70 a2=0 a3=0 items=0 ppid=2654 pid=2712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.522000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:45:31.524000 audit[2714]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=2714 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.524000 audit[2714]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffdd9a4690 a2=0 a3=0 items=0 ppid=2654 pid=2714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.524000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:45:31.558000 audit[2717]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2717 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.558000 audit[2717]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffe41a3db0 a2=0 a3=0 items=0 ppid=2654 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.558000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 12:45:31.559000 audit[2719]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=2719 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.559000 audit[2719]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe641f770 a2=0 a3=0 items=0 ppid=2654 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.559000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:45:31.561000 audit[2721]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2721 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.561000 audit[2721]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffe1518b80 a2=0 a3=0 items=0 ppid=2654 pid=2721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.561000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:45:31.563000 audit[2723]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2723 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.563000 audit[2723]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffe6262ea0 a2=0 a3=0 items=0 ppid=2654 pid=2723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.563000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:45:31.565000 audit[2725]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=2725 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.565000 audit[2725]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffffe604140 a2=0 a3=0 items=0 ppid=2654 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.565000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:45:31.839000 audit[2755]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=2755 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:31.839000 audit[2755]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc3f81340 a2=0 a3=0 items=0 ppid=2654 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.839000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:45:31.841000 audit[2757]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=2757 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:31.841000 audit[2757]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffee5927a0 a2=0 a3=0 items=0 ppid=2654 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.841000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:45:31.842000 audit[2759]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2759 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:31.842000 audit[2759]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc5b674f0 a2=0 a3=0 items=0 ppid=2654 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.842000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:45:31.844000 audit[2761]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2761 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:31.844000 audit[2761]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdc2d1570 a2=0 a3=0 items=0 ppid=2654 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.844000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:45:31.846000 audit[2763]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=2763 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:31.846000 audit[2763]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff69c7bc0 a2=0 a3=0 items=0 ppid=2654 pid=2763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.846000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:45:31.847000 audit[2765]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=2765 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:31.847000 audit[2765]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd41f0e40 a2=0 a3=0 items=0 ppid=2654 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.847000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:45:31.849000 audit[2767]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=2767 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:31.849000 audit[2767]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe3292b90 a2=0 a3=0 items=0 ppid=2654 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.849000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:45:31.851000 audit[2769]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=2769 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:31.851000 audit[2769]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffead017b0 a2=0 a3=0 items=0 ppid=2654 pid=2769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.851000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:45:31.852000 audit[2771]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=2771 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:31.852000 audit[2771]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffc66ffc30 a2=0 a3=0 items=0 ppid=2654 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.852000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 12:45:31.854000 audit[2773]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=2773 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:31.854000 audit[2773]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc4be14d0 a2=0 a3=0 items=0 ppid=2654 pid=2773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.854000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:45:31.856000 audit[2775]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=2775 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:31.856000 audit[2775]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffd5488010 a2=0 a3=0 items=0 ppid=2654 pid=2775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.856000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:45:31.857000 audit[2777]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=2777 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:31.857000 audit[2777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc466fbe0 a2=0 a3=0 items=0 ppid=2654 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.857000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:45:31.860000 audit[2779]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=2779 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:31.860000 audit[2779]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffffc5a9990 a2=0 a3=0 items=0 ppid=2654 pid=2779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.860000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:45:31.865000 audit[2784]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=2784 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.865000 audit[2784]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffd175340 a2=0 a3=0 items=0 ppid=2654 pid=2784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.865000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:45:31.867000 audit[2786]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2786 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.867000 audit[2786]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffca674ff0 a2=0 a3=0 items=0 ppid=2654 pid=2786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.867000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:45:31.869000 audit[2788]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2788 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.869000 audit[2788]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffdee1feb0 a2=0 a3=0 items=0 ppid=2654 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.869000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:45:31.871000 audit[2790]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2790 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:31.871000 audit[2790]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffee0a0be0 a2=0 a3=0 items=0 ppid=2654 pid=2790 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.871000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:45:31.872000 audit[2792]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=2792 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:31.872000 audit[2792]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffcb44f0c0 a2=0 a3=0 items=0 ppid=2654 pid=2792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.872000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:45:31.874000 audit[2794]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=2794 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:31.874000 audit[2794]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff499ba60 a2=0 a3=0 items=0 ppid=2654 pid=2794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.874000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:45:31.936000 audit[2799]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=2799 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.936000 audit[2799]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=fffffdde8a00 a2=0 a3=0 items=0 ppid=2654 pid=2799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.936000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 12:45:31.938000 audit[2801]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=2801 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.938000 audit[2801]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffff9be7f0 a2=0 a3=0 items=0 ppid=2654 pid=2801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.938000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 12:45:31.945000 audit[2809]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2809 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.945000 audit[2809]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=fffff5bafee0 a2=0 a3=0 items=0 ppid=2654 pid=2809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.945000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 12:45:31.950000 audit[2814]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2814 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.950000 audit[2814]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffff003a20 a2=0 a3=0 items=0 ppid=2654 pid=2814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.950000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 12:45:31.952000 audit[2816]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2816 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.952000 audit[2816]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=fffffa4bcb50 a2=0 a3=0 items=0 ppid=2654 pid=2816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.952000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 12:45:31.954000 audit[2818]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2818 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.954000 audit[2818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd8205dd0 a2=0 a3=0 items=0 ppid=2654 pid=2818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.954000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 12:45:31.956000 audit[2820]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=2820 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.956000 audit[2820]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffc820f420 a2=0 a3=0 items=0 ppid=2654 pid=2820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.956000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:45:31.958000 audit[2822]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=2822 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:31.958000 audit[2822]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdacd5eb0 a2=0 a3=0 items=0 ppid=2654 pid=2822 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:31.958000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 12:45:31.960068 systemd-networkd[1724]: docker0: Link UP Dec 16 12:45:31.982485 dockerd[2654]: time="2025-12-16T12:45:31.982436701Z" level=info msg="Loading containers: done." Dec 16 12:45:31.992958 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2438975550-merged.mount: Deactivated successfully. Dec 16 12:45:32.050822 dockerd[2654]: time="2025-12-16T12:45:32.050479965Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:45:32.050822 dockerd[2654]: time="2025-12-16T12:45:32.050568941Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:45:32.050822 dockerd[2654]: time="2025-12-16T12:45:32.050673957Z" level=info msg="Initializing buildkit" Dec 16 12:45:32.111207 dockerd[2654]: time="2025-12-16T12:45:32.111078269Z" level=info msg="Completed buildkit initialization" Dec 16 12:45:32.117394 dockerd[2654]: time="2025-12-16T12:45:32.117338005Z" level=info msg="Daemon has completed initialization" Dec 16 12:45:32.118638 dockerd[2654]: time="2025-12-16T12:45:32.117568333Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:45:32.118347 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 12:45:32.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:32.935897 containerd[2141]: time="2025-12-16T12:45:32.935798789Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 12:45:33.985502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4192319172.mount: Deactivated successfully. 
Dec 16 12:45:34.904222 containerd[2141]: time="2025-12-16T12:45:34.904162253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:34.906613 containerd[2141]: time="2025-12-16T12:45:34.906559965Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=25743710" Dec 16 12:45:34.909032 containerd[2141]: time="2025-12-16T12:45:34.908980989Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:34.912487 containerd[2141]: time="2025-12-16T12:45:34.912412517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:34.913005 containerd[2141]: time="2025-12-16T12:45:34.912978037Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 1.977131024s" Dec 16 12:45:34.913112 containerd[2141]: time="2025-12-16T12:45:34.913100189Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Dec 16 12:45:34.913740 containerd[2141]: time="2025-12-16T12:45:34.913712565Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 16 12:45:36.157030 containerd[2141]: time="2025-12-16T12:45:36.156370925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:36.158672 containerd[2141]: time="2025-12-16T12:45:36.158627541Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22610801" Dec 16 12:45:36.161193 containerd[2141]: time="2025-12-16T12:45:36.161165629Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:36.166039 containerd[2141]: time="2025-12-16T12:45:36.166005949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:36.166714 containerd[2141]: time="2025-12-16T12:45:36.166677997Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.252933552s" Dec 16 12:45:36.166815 containerd[2141]: time="2025-12-16T12:45:36.166803181Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Dec 16 
12:45:36.167530 containerd[2141]: time="2025-12-16T12:45:36.167505933Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 16 12:45:37.274793 containerd[2141]: time="2025-12-16T12:45:37.274728368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:37.277883 containerd[2141]: time="2025-12-16T12:45:37.277831582Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17613052" Dec 16 12:45:37.282473 containerd[2141]: time="2025-12-16T12:45:37.282424862Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:37.286870 containerd[2141]: time="2025-12-16T12:45:37.286813234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:37.287582 containerd[2141]: time="2025-12-16T12:45:37.287393300Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.119776662s" Dec 16 12:45:37.287582 containerd[2141]: time="2025-12-16T12:45:37.287419079Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Dec 16 12:45:37.288115 containerd[2141]: time="2025-12-16T12:45:37.288098311Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 16 12:45:37.514730 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 12:45:37.515965 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:45:38.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:38.041489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:38.045700 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 16 12:45:38.045798 kernel: audit: type=1130 audit(1765889138.040:317): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:38.059841 (kubelet)[2937]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:45:38.091904 kubelet[2937]: E1216 12:45:38.091852 2937 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:45:38.094144 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:45:38.094258 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 16 12:45:38.096181 systemd[1]: kubelet.service: Consumed 118ms CPU time, 105.6M memory peak. Dec 16 12:45:38.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:45:38.109159 kernel: audit: type=1131 audit(1765889138.095:318): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:45:39.732981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount997364186.mount: Deactivated successfully. Dec 16 12:45:40.026992 containerd[2141]: time="2025-12-16T12:45:40.026528926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:40.029129 containerd[2141]: time="2025-12-16T12:45:40.029071391Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=9843275" Dec 16 12:45:40.033149 containerd[2141]: time="2025-12-16T12:45:40.033118199Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:40.037013 containerd[2141]: time="2025-12-16T12:45:40.036966603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:40.037856 containerd[2141]: time="2025-12-16T12:45:40.037531417Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 2.749326868s" Dec 16 12:45:40.037856 containerd[2141]: time="2025-12-16T12:45:40.037560066Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Dec 16 12:45:40.038290 containerd[2141]: time="2025-12-16T12:45:40.038256308Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 12:45:40.691143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1761175406.mount: Deactivated successfully. Dec 16 12:45:41.152167 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Dec 16 12:45:41.665653 containerd[2141]: time="2025-12-16T12:45:41.665588227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:41.668163 containerd[2141]: time="2025-12-16T12:45:41.668090132Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=15956550" Dec 16 12:45:41.672392 containerd[2141]: time="2025-12-16T12:45:41.671871253Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:41.677643 containerd[2141]: time="2025-12-16T12:45:41.677589177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:41.678760 containerd[2141]: time="2025-12-16T12:45:41.678714374Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.640044527s" Dec 16 12:45:41.679211 containerd[2141]: time="2025-12-16T12:45:41.679155473Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Dec 16 12:45:41.680026 containerd[2141]: time="2025-12-16T12:45:41.679933821Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 12:45:42.244367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount306758068.mount: Deactivated successfully. 
Dec 16 12:45:42.265768 containerd[2141]: time="2025-12-16T12:45:42.265712959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:45:42.270157 containerd[2141]: time="2025-12-16T12:45:42.270075663Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:45:42.273406 containerd[2141]: time="2025-12-16T12:45:42.273377076Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:45:42.277783 containerd[2141]: time="2025-12-16T12:45:42.277748013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:45:42.278177 containerd[2141]: time="2025-12-16T12:45:42.278147847Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 598.182017ms" Dec 16 12:45:42.278305 containerd[2141]: time="2025-12-16T12:45:42.278240817Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 16 12:45:42.278771 containerd[2141]: time="2025-12-16T12:45:42.278745262Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 12:45:43.008451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount331802348.mount: Deactivated successfully. 
Dec 16 12:45:44.884319 containerd[2141]: time="2025-12-16T12:45:44.884262553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:44.887936 containerd[2141]: time="2025-12-16T12:45:44.887882470Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=66164527" Dec 16 12:45:44.891164 containerd[2141]: time="2025-12-16T12:45:44.891134826Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:44.896303 containerd[2141]: time="2025-12-16T12:45:44.896246478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:45:44.896817 containerd[2141]: time="2025-12-16T12:45:44.896664032Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.617890025s" Dec 16 12:45:44.896817 containerd[2141]: time="2025-12-16T12:45:44.896695009Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Dec 16 12:45:46.819450 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:46.819951 systemd[1]: kubelet.service: Consumed 118ms CPU time, 105.6M memory peak. Dec 16 12:45:46.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:46.829334 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:45:46.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:46.845213 kernel: audit: type=1130 audit(1765889146.818:319): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:46.845328 kernel: audit: type=1131 audit(1765889146.818:320): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:46.862292 systemd[1]: Reload requested from client PID 3089 ('systemctl') (unit session-9.scope)... Dec 16 12:45:46.862310 systemd[1]: Reloading... Dec 16 12:45:46.961973 zram_generator::config[3134]: No configuration found. Dec 16 12:45:47.138203 systemd[1]: Reloading finished in 275 ms. 
Dec 16 12:45:47.151000 audit: BPF prog-id=87 op=LOAD Dec 16 12:45:47.151000 audit: BPF prog-id=88 op=LOAD Dec 16 12:45:47.151000 audit: BPF prog-id=89 op=LOAD Dec 16 12:45:47.151000 audit: BPF prog-id=90 op=LOAD Dec 16 12:45:47.153000 audit: BPF prog-id=72 op=UNLOAD Dec 16 12:45:47.153000 audit: BPF prog-id=73 op=UNLOAD Dec 16 12:45:47.153000 audit: BPF prog-id=74 op=UNLOAD Dec 16 12:45:47.153000 audit: BPF prog-id=80 op=UNLOAD Dec 16 12:45:47.156000 audit: BPF prog-id=91 op=LOAD Dec 16 12:45:47.156000 audit: BPF prog-id=92 op=LOAD Dec 16 12:45:47.156000 audit: BPF prog-id=67 op=UNLOAD Dec 16 12:45:47.156000 audit: BPF prog-id=68 op=UNLOAD Dec 16 12:45:47.161834 kernel: audit: type=1334 audit(1765889147.151:321): prog-id=87 op=LOAD Dec 16 12:45:47.161896 kernel: audit: type=1334 audit(1765889147.151:322): prog-id=88 op=LOAD Dec 16 12:45:47.162364 kernel: audit: type=1334 audit(1765889147.151:323): prog-id=89 op=LOAD Dec 16 12:45:47.162396 kernel: audit: type=1334 audit(1765889147.151:324): prog-id=90 op=LOAD Dec 16 12:45:47.162408 kernel: audit: type=1334 audit(1765889147.153:325): prog-id=72 op=UNLOAD Dec 16 12:45:47.162424 kernel: audit: type=1334 audit(1765889147.153:326): prog-id=73 op=UNLOAD Dec 16 12:45:47.162437 kernel: audit: type=1334 audit(1765889147.153:327): prog-id=74 op=UNLOAD Dec 16 12:45:47.162454 kernel: audit: type=1334 audit(1765889147.153:328): prog-id=80 op=UNLOAD Dec 16 12:45:47.160000 audit: BPF prog-id=93 op=LOAD Dec 16 12:45:47.160000 audit: BPF prog-id=84 op=UNLOAD Dec 16 12:45:47.160000 audit: BPF prog-id=94 op=LOAD Dec 16 12:45:47.161000 audit: BPF prog-id=95 op=LOAD Dec 16 12:45:47.161000 audit: BPF prog-id=85 op=UNLOAD Dec 16 12:45:47.161000 audit: BPF prog-id=86 op=UNLOAD Dec 16 12:45:47.164000 audit: BPF prog-id=96 op=LOAD Dec 16 12:45:47.164000 audit: BPF prog-id=79 op=UNLOAD Dec 16 12:45:47.177000 audit: BPF prog-id=97 op=LOAD Dec 16 12:45:47.177000 audit: BPF prog-id=76 op=UNLOAD Dec 16 12:45:47.177000 audit: BPF prog-id=98 op=LOAD Dec 16 12:45:47.177000 audit: BPF prog-id=99 op=LOAD Dec 16 12:45:47.177000 audit: BPF prog-id=77 op=UNLOAD Dec 16 12:45:47.177000 audit: BPF prog-id=78 op=UNLOAD Dec 16 12:45:47.177000 audit: BPF prog-id=100 op=LOAD Dec 16 12:45:47.177000 audit: BPF prog-id=81 op=UNLOAD Dec 16 12:45:47.178000 audit: BPF prog-id=101 op=LOAD Dec 16 12:45:47.178000 audit: BPF prog-id=102 op=LOAD Dec 16 12:45:47.178000 audit: BPF prog-id=82 op=UNLOAD Dec 16 12:45:47.178000 audit: BPF prog-id=83 op=UNLOAD Dec 16 12:45:47.178000 audit: BPF prog-id=103 op=LOAD Dec 16 12:45:47.178000 audit: BPF prog-id=75 op=UNLOAD Dec 16 12:45:47.178000 audit: BPF prog-id=104 op=LOAD Dec 16 12:45:47.178000 audit: BPF prog-id=69 op=UNLOAD Dec 16 12:45:47.182000 audit: BPF prog-id=105 op=LOAD Dec 16 12:45:47.182000 audit: BPF prog-id=106 op=LOAD Dec 16 12:45:47.182000 audit: BPF prog-id=70 op=UNLOAD Dec 16 12:45:47.182000 audit: BPF prog-id=71 op=UNLOAD Dec 16 12:45:47.195645 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 12:45:47.195714 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 12:45:47.196039 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:47.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:45:47.196107 systemd[1]: kubelet.service: Consumed 106ms CPU time, 95M memory peak. 
Dec 16 12:45:47.197848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:45:48.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:48.042020 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:48.051551 (kubelet)[3203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:45:48.085281 kubelet[3203]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:45:48.085281 kubelet[3203]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:45:48.085281 kubelet[3203]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:45:48.085687 kubelet[3203]: I1216 12:45:48.085306 3203 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:45:48.322195 kubelet[3203]: I1216 12:45:48.321028 3203 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 12:45:48.322195 kubelet[3203]: I1216 12:45:48.321066 3203 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:45:48.322195 kubelet[3203]: I1216 12:45:48.321301 3203 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 12:45:48.342252 kubelet[3203]: E1216 12:45:48.342207 3203 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.49:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:45:48.343448 kubelet[3203]: I1216 12:45:48.343415 3203 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:45:48.352568 kubelet[3203]: I1216 12:45:48.352540 3203 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:45:48.355438 kubelet[3203]: I1216 12:45:48.355409 3203 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:45:48.357053 kubelet[3203]: I1216 12:45:48.357004 3203 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:45:48.357337 kubelet[3203]: I1216 12:45:48.357183 3203 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-a-a4975b77c5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:45:48.357466 kubelet[3203]: I1216 12:45:48.357454 3203 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:45:48.357513 kubelet[3203]: I1216 12:45:48.357507 3203 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 12:45:48.357690 kubelet[3203]: I1216 12:45:48.357677 3203 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:45:48.360404 kubelet[3203]: I1216 12:45:48.360276 3203 kubelet.go:446] "Attempting to sync node with API server" Dec 16 12:45:48.360404 kubelet[3203]: I1216 12:45:48.360305 3203 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:45:48.360404 kubelet[3203]: I1216 12:45:48.360331 3203 kubelet.go:352] "Adding apiserver pod source" Dec 16 12:45:48.360404 kubelet[3203]: I1216 12:45:48.360340 3203 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:45:48.362769 kubelet[3203]: W1216 12:45:48.362668 3203 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-a4975b77c5&limit=500&resourceVersion=0": dial tcp 10.200.20.49:6443: connect: connection refused Dec 16 12:45:48.362769 kubelet[3203]: E1216 12:45:48.362725 3203 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-a4975b77c5&limit=500&resourceVersion=0\": dial tcp 10.200.20.49:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:45:48.363757 
kubelet[3203]: W1216 12:45:48.363680 3203 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.49:6443: connect: connection refused Dec 16 12:45:48.363757 kubelet[3203]: E1216 12:45:48.363725 3203 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.49:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.49:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:45:48.363857 kubelet[3203]: I1216 12:45:48.363837 3203 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:45:48.364200 kubelet[3203]: I1216 12:45:48.364181 3203 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 12:45:48.364246 kubelet[3203]: W1216 12:45:48.364237 3203 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 12:45:48.364753 kubelet[3203]: I1216 12:45:48.364720 3203 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:45:48.364753 kubelet[3203]: I1216 12:45:48.364758 3203 server.go:1287] "Started kubelet" Dec 16 12:45:48.369261 kubelet[3203]: I1216 12:45:48.368876 3203 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:45:48.369608 kubelet[3203]: E1216 12:45:48.369514 3203 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.49:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.49:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515.1.0-a-a4975b77c5.1881b2d308ff5ac0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515.1.0-a-a4975b77c5,UID:ci-4515.1.0-a-a4975b77c5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515.1.0-a-a4975b77c5,},FirstTimestamp:2025-12-16 12:45:48.364741312 +0000 UTC m=+0.307048219,LastTimestamp:2025-12-16 12:45:48.364741312 +0000 UTC m=+0.307048219,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515.1.0-a-a4975b77c5,}" Dec 16 12:45:48.371158 kubelet[3203]: I1216 12:45:48.371128 3203 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:45:48.370000 audit[3214]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3214 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:48.370000 audit[3214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe77f6b20 a2=0 a3=0 items=0 ppid=3203 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:48.370000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:45:48.372169 kubelet[3203]: I1216 12:45:48.372151 3203 server.go:479] "Adding debug handlers to kubelet server" Dec 16 12:45:48.371000 audit[3215]: NETFILTER_CFG table=filter:46 family=2 
entries=1 op=nft_register_chain pid=3215 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:48.371000 audit[3215]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffbcfdb50 a2=0 a3=0 items=0 ppid=3203 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:48.371000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:45:48.373547 kubelet[3203]: I1216 12:45:48.373527 3203 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:45:48.373733 kubelet[3203]: E1216 12:45:48.373713 3203 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-a4975b77c5\" not found" Dec 16 12:45:48.373816 kubelet[3203]: I1216 12:45:48.373770 3203 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:45:48.374072 kubelet[3203]: I1216 12:45:48.374055 3203 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:45:48.374508 kubelet[3203]: I1216 12:45:48.374474 3203 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:45:48.373000 audit[3217]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3217 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:48.373000 audit[3217]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd330a030 a2=0 a3=0 items=0 ppid=3203 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:48.373000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:45:48.375154 kubelet[3203]: E1216 12:45:48.375119 3203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-a4975b77c5?timeout=10s\": dial tcp 10.200.20.49:6443: connect: connection refused" interval="200ms" Dec 16 12:45:48.375584 kubelet[3203]: I1216 12:45:48.375572 3203 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:45:48.375733 kubelet[3203]: I1216 12:45:48.375684 3203 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:45:48.376148 kubelet[3203]: E1216 12:45:48.376127 3203 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:45:48.376348 kubelet[3203]: I1216 12:45:48.376330 3203 factory.go:221] Registration of the systemd container factory successfully Dec 16 12:45:48.376475 kubelet[3203]: I1216 12:45:48.376460 3203 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:45:48.376000 audit[3219]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:48.376000 audit[3219]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd6cb4450 a2=0 a3=0 items=0 ppid=3203 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:48.376000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:45:48.377861 kubelet[3203]: I1216 12:45:48.377845 3203 factory.go:221] Registration of the containerd container factory successfully Dec 16 12:45:48.380855 kubelet[3203]: W1216 12:45:48.380794 3203 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.49:6443: connect: connection refused Dec 16 12:45:48.380855 kubelet[3203]: E1216 12:45:48.380851 3203 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.49:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:45:48.402000 audit[3225]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:48.402000 audit[3225]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff27c6fe0 a2=0 a3=0 items=0 ppid=3203 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:48.402000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 12:45:48.405024 kubelet[3203]: I1216 12:45:48.405003 3203 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:45:48.405024 kubelet[3203]: I1216 12:45:48.405018 3203 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:45:48.405143 kubelet[3203]: I1216 12:45:48.405038 3203 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:45:48.405261 kubelet[3203]: I1216 12:45:48.405197 3203 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 16 12:45:48.405000 audit[3228]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3228 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:48.405000 audit[3228]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdb2b1bf0 a2=0 a3=0 items=0 ppid=3203 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:48.405000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:45:48.405000 audit[3229]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3229 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:48.405000 audit[3229]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff342b950 a2=0 a3=0 items=0 ppid=3203 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:48.405000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:45:48.407000 audit[3230]: NETFILTER_CFG table=mangle:52 family=10 entries=1 op=nft_register_chain pid=3230 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:48.407000 audit[3230]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdca12ed0 a2=0 a3=0 items=0 ppid=3203 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:48.407000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:45:48.408000 audit[3231]: NETFILTER_CFG table=nat:53 family=2 entries=1 op=nft_register_chain pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:48.408000 audit[3231]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe396e360 a2=0 a3=0 items=0 ppid=3203 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:48.408000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:45:48.409727 kubelet[3203]: I1216 12:45:48.406967 3203 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 12:45:48.409727 kubelet[3203]: I1216 12:45:48.407158 3203 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 12:45:48.409727 kubelet[3203]: I1216 12:45:48.407181 3203 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 12:45:48.409727 kubelet[3203]: I1216 12:45:48.407188 3203 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 12:45:48.409727 kubelet[3203]: E1216 12:45:48.407219 3203 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:45:48.409727 kubelet[3203]: W1216 12:45:48.408300 3203 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.49:6443: connect: connection refused Dec 16 12:45:48.409727 kubelet[3203]: E1216 12:45:48.408348 3203 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.49:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:45:48.410456 kubelet[3203]: I1216 12:45:48.410433 3203 policy_none.go:49] "None policy: Start" Dec 16 12:45:48.410456 kubelet[3203]: I1216 12:45:48.410460 3203 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:45:48.410541 kubelet[3203]: I1216 12:45:48.410470 3203 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:45:48.409000 audit[3232]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_chain pid=3232 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:48.409000 audit[3232]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffd2e21c0 a2=0 a3=0 items=0 ppid=3203 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:48.409000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:45:48.410000 audit[3233]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:45:48.410000 audit[3233]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffedb89f50 a2=0 a3=0 items=0 ppid=3203 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:48.410000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:45:48.411000 audit[3234]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3234 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:45:48.411000 audit[3234]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc9119a80 a2=0 a3=0 items=0 ppid=3203 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:48.411000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:45:48.418694 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 16 12:45:48.430261 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:45:48.433810 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 12:45:48.449075 kubelet[3203]: I1216 12:45:48.448864 3203 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 12:45:48.449195 kubelet[3203]: I1216 12:45:48.449078 3203 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:45:48.449195 kubelet[3203]: I1216 12:45:48.449127 3203 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:45:48.449908 kubelet[3203]: I1216 12:45:48.449559 3203 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:45:48.450389 kubelet[3203]: E1216 12:45:48.450363 3203 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:45:48.450435 kubelet[3203]: E1216 12:45:48.450427 3203 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515.1.0-a-a4975b77c5\" not found" Dec 16 12:45:48.518374 systemd[1]: Created slice kubepods-burstable-podbf9761a59b5892088a6bdc70eef89598.slice - libcontainer container kubepods-burstable-podbf9761a59b5892088a6bdc70eef89598.slice. Dec 16 12:45:48.533835 kubelet[3203]: E1216 12:45:48.533784 3203 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-a4975b77c5\" not found" node="ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:48.537346 systemd[1]: Created slice kubepods-burstable-pod5c66cf11ac45fcd578d73298eeaa3234.slice - libcontainer container kubepods-burstable-pod5c66cf11ac45fcd578d73298eeaa3234.slice. Dec 16 12:45:48.550901 kubelet[3203]: I1216 12:45:48.550865 3203 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:48.551573 kubelet[3203]: E1216 12:45:48.551376 3203 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.49:6443/api/v1/nodes\": dial tcp 10.200.20.49:6443: connect: connection refused" node="ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:48.552433 kubelet[3203]: E1216 12:45:48.552406 3203 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-a4975b77c5\" not found" node="ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:48.555857 systemd[1]: Created slice kubepods-burstable-pod2ce8bfb89f7ef284d742fe39c81e4b7b.slice - libcontainer container kubepods-burstable-pod2ce8bfb89f7ef284d742fe39c81e4b7b.slice. 
Dec 16 12:45:48.557666 kubelet[3203]: E1216 12:45:48.557610 3203 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-a4975b77c5\" not found" node="ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:48.576834 kubelet[3203]: E1216 12:45:48.576714 3203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-a4975b77c5?timeout=10s\": dial tcp 10.200.20.49:6443: connect: connection refused" interval="400ms" Dec 16 12:45:48.677058 kubelet[3203]: I1216 12:45:48.677008 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf9761a59b5892088a6bdc70eef89598-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-a-a4975b77c5\" (UID: \"bf9761a59b5892088a6bdc70eef89598\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:48.677058 kubelet[3203]: I1216 12:45:48.677048 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5c66cf11ac45fcd578d73298eeaa3234-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-a-a4975b77c5\" (UID: \"5c66cf11ac45fcd578d73298eeaa3234\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:48.677058 kubelet[3203]: I1216 12:45:48.677066 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c66cf11ac45fcd578d73298eeaa3234-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-a-a4975b77c5\" (UID: \"5c66cf11ac45fcd578d73298eeaa3234\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:48.677269 kubelet[3203]: I1216 12:45:48.677091 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c66cf11ac45fcd578d73298eeaa3234-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-a4975b77c5\" (UID: \"5c66cf11ac45fcd578d73298eeaa3234\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:48.677269 kubelet[3203]: I1216 12:45:48.677101 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c66cf11ac45fcd578d73298eeaa3234-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-a4975b77c5\" (UID: \"5c66cf11ac45fcd578d73298eeaa3234\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:48.677269 kubelet[3203]: I1216 12:45:48.677110 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5c66cf11ac45fcd578d73298eeaa3234-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-a-a4975b77c5\" (UID: \"5c66cf11ac45fcd578d73298eeaa3234\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:48.677269 kubelet[3203]: I1216 12:45:48.677119 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2ce8bfb89f7ef284d742fe39c81e4b7b-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-a-a4975b77c5\" (UID: \"2ce8bfb89f7ef284d742fe39c81e4b7b\") " 
pod="kube-system/kube-scheduler-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:48.677269 kubelet[3203]: I1216 12:45:48.677130 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf9761a59b5892088a6bdc70eef89598-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-a-a4975b77c5\" (UID: \"bf9761a59b5892088a6bdc70eef89598\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:48.677353 kubelet[3203]: I1216 12:45:48.677140 3203 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf9761a59b5892088a6bdc70eef89598-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-a-a4975b77c5\" (UID: \"bf9761a59b5892088a6bdc70eef89598\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:48.753651 kubelet[3203]: I1216 12:45:48.753599 3203 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:48.753983 kubelet[3203]: E1216 12:45:48.753960 3203 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.49:6443/api/v1/nodes\": dial tcp 10.200.20.49:6443: connect: connection refused" node="ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:48.835755 containerd[2141]: time="2025-12-16T12:45:48.835655998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-a-a4975b77c5,Uid:bf9761a59b5892088a6bdc70eef89598,Namespace:kube-system,Attempt:0,}" Dec 16 12:45:48.853817 containerd[2141]: time="2025-12-16T12:45:48.853770962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-a-a4975b77c5,Uid:5c66cf11ac45fcd578d73298eeaa3234,Namespace:kube-system,Attempt:0,}" Dec 16 12:45:48.858957 containerd[2141]: time="2025-12-16T12:45:48.858881622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-a-a4975b77c5,Uid:2ce8bfb89f7ef284d742fe39c81e4b7b,Namespace:kube-system,Attempt:0,}" Dec 16 12:45:48.977792 kubelet[3203]: E1216 12:45:48.977745 3203 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-a-a4975b77c5?timeout=10s\": dial tcp 10.200.20.49:6443: connect: connection refused" interval="800ms" Dec 16 12:45:48.994110 containerd[2141]: time="2025-12-16T12:45:48.993575995Z" level=info msg="connecting to shim 97586c9a957c9dfcb8cabaff3a5873c01f73497dfe23efb1b1ddfd71fa745e54" address="unix:///run/containerd/s/4f38771df04572035d36c377fb9da0981665280fba46b03858dcddf02a7e2e36" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:45:49.021392 systemd[1]: Started cri-containerd-97586c9a957c9dfcb8cabaff3a5873c01f73497dfe23efb1b1ddfd71fa745e54.scope - libcontainer container 97586c9a957c9dfcb8cabaff3a5873c01f73497dfe23efb1b1ddfd71fa745e54. 
Dec 16 12:45:49.076000 audit: BPF prog-id=107 op=LOAD Dec 16 12:45:49.077000 audit: BPF prog-id=108 op=LOAD Dec 16 12:45:49.077000 audit[3255]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3244 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937353836633961393537633964666362386361626166663361353837 Dec 16 12:45:49.077000 audit: BPF prog-id=108 op=UNLOAD Dec 16 12:45:49.077000 audit[3255]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3244 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937353836633961393537633964666362386361626166663361353837 Dec 16 12:45:49.077000 audit: BPF prog-id=109 op=LOAD Dec 16 12:45:49.077000 audit[3255]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3244 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937353836633961393537633964666362386361626166663361353837 Dec 16 12:45:49.077000 audit: BPF prog-id=110 op=LOAD Dec 16 12:45:49.077000 audit[3255]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3244 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937353836633961393537633964666362386361626166663361353837 Dec 16 12:45:49.077000 audit: BPF prog-id=110 op=UNLOAD Dec 16 12:45:49.077000 audit[3255]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3244 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937353836633961393537633964666362386361626166663361353837 Dec 16 12:45:49.077000 audit: BPF prog-id=109 op=UNLOAD Dec 16 12:45:49.077000 audit[3255]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3244 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937353836633961393537633964666362386361626166663361353837 Dec 16 12:45:49.077000 audit: BPF prog-id=111 op=LOAD Dec 16 12:45:49.077000 audit[3255]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3244 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937353836633961393537633964666362386361626166663361353837 Dec 16 12:45:49.106298 containerd[2141]: time="2025-12-16T12:45:49.105838463Z" level=info msg="connecting to shim fca43c4b12b7e7c2490ab8015a9827c842fa9e63e76f22a50b3decd6d83946bf" address="unix:///run/containerd/s/b6cf7f13b74326e35083d9a373fb8bfffbe5761f2fd46cdc6e8cce2aed687a4b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:45:49.120797 containerd[2141]: time="2025-12-16T12:45:49.120659306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-a-a4975b77c5,Uid:bf9761a59b5892088a6bdc70eef89598,Namespace:kube-system,Attempt:0,} returns sandbox id \"97586c9a957c9dfcb8cabaff3a5873c01f73497dfe23efb1b1ddfd71fa745e54\"" Dec 16 12:45:49.124599 containerd[2141]: time="2025-12-16T12:45:49.124556962Z" level=info msg="CreateContainer within sandbox \"97586c9a957c9dfcb8cabaff3a5873c01f73497dfe23efb1b1ddfd71fa745e54\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:45:49.138305 systemd[1]: Started cri-containerd-fca43c4b12b7e7c2490ab8015a9827c842fa9e63e76f22a50b3decd6d83946bf.scope - libcontainer container fca43c4b12b7e7c2490ab8015a9827c842fa9e63e76f22a50b3decd6d83946bf. 
Dec 16 12:45:49.146000 audit: BPF prog-id=112 op=LOAD Dec 16 12:45:49.146000 audit: BPF prog-id=113 op=LOAD Dec 16 12:45:49.146000 audit[3302]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3288 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663613433633462313262376537633234393061623830313561393832 Dec 16 12:45:49.146000 audit: BPF prog-id=113 op=UNLOAD Dec 16 12:45:49.146000 audit[3302]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663613433633462313262376537633234393061623830313561393832 Dec 16 12:45:49.147000 audit: BPF prog-id=114 op=LOAD Dec 16 12:45:49.147000 audit[3302]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3288 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663613433633462313262376537633234393061623830313561393832 Dec 16 12:45:49.147000 audit: BPF prog-id=115 op=LOAD Dec 16 12:45:49.147000 audit[3302]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3288 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663613433633462313262376537633234393061623830313561393832 Dec 16 12:45:49.147000 audit: BPF prog-id=115 op=UNLOAD Dec 16 12:45:49.147000 audit[3302]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663613433633462313262376537633234393061623830313561393832 Dec 16 12:45:49.147000 audit: BPF prog-id=114 op=UNLOAD Dec 16 12:45:49.147000 audit[3302]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663613433633462313262376537633234393061623830313561393832 Dec 16 12:45:49.147000 audit: BPF prog-id=116 op=LOAD Dec 16 12:45:49.147000 audit[3302]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3288 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663613433633462313262376537633234393061623830313561393832 Dec 16 12:45:49.157185 kubelet[3203]: I1216 12:45:49.157148 3203 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:49.158061 kubelet[3203]: E1216 12:45:49.158032 3203 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.49:6443/api/v1/nodes\": dial tcp 10.200.20.49:6443: connect: connection refused" node="ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:49.218211 kubelet[3203]: W1216 12:45:49.218161 3203 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.49:6443: connect: connection refused Dec 16 12:45:49.218460 kubelet[3203]: E1216 12:45:49.218308 3203 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.49:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:45:49.282667 containerd[2141]: time="2025-12-16T12:45:49.282564812Z" level=info msg="connecting to shim 3d319a3561ee8dc6676d20087c9e3817fd9d01bd406fecdf482e8cdef01e5acc" address="unix:///run/containerd/s/5cc54e16a4ceab3aa3e2a4cc2f8d08bdf1245cb2b0c99b35b98e1ac1fe322881" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:45:49.307324 systemd[1]: Started cri-containerd-3d319a3561ee8dc6676d20087c9e3817fd9d01bd406fecdf482e8cdef01e5acc.scope - libcontainer container 3d319a3561ee8dc6676d20087c9e3817fd9d01bd406fecdf482e8cdef01e5acc. 
Dec 16 12:45:49.315000 audit: BPF prog-id=117 op=LOAD Dec 16 12:45:49.315000 audit: BPF prog-id=118 op=LOAD Dec 16 12:45:49.315000 audit[3347]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3337 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364333139613335363165653864633636373664323030383763396533 Dec 16 12:45:49.315000 audit: BPF prog-id=118 op=UNLOAD Dec 16 12:45:49.315000 audit[3347]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364333139613335363165653864633636373664323030383763396533 Dec 16 12:45:49.315000 audit: BPF prog-id=119 op=LOAD Dec 16 12:45:49.315000 audit[3347]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3337 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364333139613335363165653864633636373664323030383763396533 Dec 16 12:45:49.315000 audit: BPF prog-id=120 op=LOAD Dec 16 12:45:49.315000 audit[3347]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3337 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364333139613335363165653864633636373664323030383763396533 Dec 16 12:45:49.315000 audit: BPF prog-id=120 op=UNLOAD Dec 16 12:45:49.315000 audit[3347]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364333139613335363165653864633636373664323030383763396533 Dec 16 12:45:49.315000 audit: BPF prog-id=119 op=UNLOAD Dec 16 12:45:49.315000 audit[3347]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364333139613335363165653864633636373664323030383763396533 Dec 16 12:45:49.315000 audit: BPF prog-id=121 op=LOAD Dec 16 12:45:49.315000 audit[3347]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3337 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364333139613335363165653864633636373664323030383763396533 Dec 16 12:45:49.332023 containerd[2141]: time="2025-12-16T12:45:49.331811791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-a-a4975b77c5,Uid:5c66cf11ac45fcd578d73298eeaa3234,Namespace:kube-system,Attempt:0,} returns sandbox id \"fca43c4b12b7e7c2490ab8015a9827c842fa9e63e76f22a50b3decd6d83946bf\"" Dec 16 12:45:49.342518 containerd[2141]: time="2025-12-16T12:45:49.342451163Z" level=info msg="CreateContainer within sandbox \"fca43c4b12b7e7c2490ab8015a9827c842fa9e63e76f22a50b3decd6d83946bf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:45:49.345531 containerd[2141]: time="2025-12-16T12:45:49.345436870Z" level=info msg="Container 83c12490fc1bdd1f6cc20301ea19666d6a827426d4fd786bdfefd9b06c3fcf53: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:45:49.377190 containerd[2141]: time="2025-12-16T12:45:49.377060237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-a-a4975b77c5,Uid:2ce8bfb89f7ef284d742fe39c81e4b7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d319a3561ee8dc6676d20087c9e3817fd9d01bd406fecdf482e8cdef01e5acc\"" Dec 16 12:45:49.380753 containerd[2141]: time="2025-12-16T12:45:49.380715937Z" level=info msg="CreateContainer within sandbox \"3d319a3561ee8dc6676d20087c9e3817fd9d01bd406fecdf482e8cdef01e5acc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:45:49.469854 kubelet[3203]: W1216 12:45:49.469793 3203 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-a4975b77c5&limit=500&resourceVersion=0": dial tcp 10.200.20.49:6443: connect: connection refused Dec 16 12:45:49.469854 kubelet[3203]: E1216 12:45:49.469860 3203 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-a-a4975b77c5&limit=500&resourceVersion=0\": dial tcp 10.200.20.49:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:45:49.473504 containerd[2141]: time="2025-12-16T12:45:49.473433618Z" level=info 
msg="CreateContainer within sandbox \"97586c9a957c9dfcb8cabaff3a5873c01f73497dfe23efb1b1ddfd71fa745e54\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"83c12490fc1bdd1f6cc20301ea19666d6a827426d4fd786bdfefd9b06c3fcf53\"" Dec 16 12:45:49.474661 containerd[2141]: time="2025-12-16T12:45:49.474287540Z" level=info msg="StartContainer for \"83c12490fc1bdd1f6cc20301ea19666d6a827426d4fd786bdfefd9b06c3fcf53\"" Dec 16 12:45:49.476316 containerd[2141]: time="2025-12-16T12:45:49.476257317Z" level=info msg="connecting to shim 83c12490fc1bdd1f6cc20301ea19666d6a827426d4fd786bdfefd9b06c3fcf53" address="unix:///run/containerd/s/4f38771df04572035d36c377fb9da0981665280fba46b03858dcddf02a7e2e36" protocol=ttrpc version=3 Dec 16 12:45:49.491170 containerd[2141]: time="2025-12-16T12:45:49.491112097Z" level=info msg="Container dc9ad67384544165a194f670ae75134405f391ee1ec6306862606e2bd59cf7ab: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:45:49.494495 systemd[1]: Started cri-containerd-83c12490fc1bdd1f6cc20301ea19666d6a827426d4fd786bdfefd9b06c3fcf53.scope - libcontainer container 83c12490fc1bdd1f6cc20301ea19666d6a827426d4fd786bdfefd9b06c3fcf53. Dec 16 12:45:49.501410 containerd[2141]: time="2025-12-16T12:45:49.501365378Z" level=info msg="Container 4e3101c7aaad1ee21b887396d68574328bee639827ca34dd78f5086bfaa85db0: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:45:49.505000 audit: BPF prog-id=122 op=LOAD Dec 16 12:45:49.506000 audit: BPF prog-id=123 op=LOAD Dec 16 12:45:49.506000 audit[3378]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017a180 a2=98 a3=0 items=0 ppid=3244 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833633132343930666331626464316636636332303330316561313936 Dec 16 12:45:49.506000 audit: BPF prog-id=123 op=UNLOAD Dec 16 12:45:49.506000 audit[3378]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3244 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833633132343930666331626464316636636332303330316561313936 Dec 16 12:45:49.506000 audit: BPF prog-id=124 op=LOAD Dec 16 12:45:49.506000 audit[3378]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017a3e8 a2=98 a3=0 items=0 ppid=3244 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833633132343930666331626464316636636332303330316561313936 Dec 16 12:45:49.506000 audit: BPF prog-id=125 op=LOAD Dec 16 
12:45:49.506000 audit[3378]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400017a168 a2=98 a3=0 items=0 ppid=3244 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833633132343930666331626464316636636332303330316561313936 Dec 16 12:45:49.506000 audit: BPF prog-id=125 op=UNLOAD Dec 16 12:45:49.506000 audit[3378]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3244 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833633132343930666331626464316636636332303330316561313936 Dec 16 12:45:49.506000 audit: BPF prog-id=124 op=UNLOAD Dec 16 12:45:49.506000 audit[3378]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3244 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833633132343930666331626464316636636332303330316561313936 Dec 16 12:45:49.506000 audit: BPF prog-id=126 op=LOAD Dec 16 12:45:49.506000 audit[3378]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017a648 a2=98 a3=0 items=0 ppid=3244 pid=3378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833633132343930666331626464316636636332303330316561313936 Dec 16 12:45:49.636803 containerd[2141]: time="2025-12-16T12:45:49.635601209Z" level=info msg="CreateContainer within sandbox \"fca43c4b12b7e7c2490ab8015a9827c842fa9e63e76f22a50b3decd6d83946bf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dc9ad67384544165a194f670ae75134405f391ee1ec6306862606e2bd59cf7ab\"" Dec 16 12:45:49.638403 containerd[2141]: time="2025-12-16T12:45:49.638367217Z" level=info msg="StartContainer for \"83c12490fc1bdd1f6cc20301ea19666d6a827426d4fd786bdfefd9b06c3fcf53\" returns successfully" Dec 16 12:45:49.638685 containerd[2141]: time="2025-12-16T12:45:49.638665272Z" level=info msg="StartContainer for \"dc9ad67384544165a194f670ae75134405f391ee1ec6306862606e2bd59cf7ab\"" Dec 16 12:45:49.640814 containerd[2141]: time="2025-12-16T12:45:49.640719253Z" level=info msg="connecting to shim 
dc9ad67384544165a194f670ae75134405f391ee1ec6306862606e2bd59cf7ab" address="unix:///run/containerd/s/b6cf7f13b74326e35083d9a373fb8bfffbe5761f2fd46cdc6e8cce2aed687a4b" protocol=ttrpc version=3 Dec 16 12:45:49.641573 containerd[2141]: time="2025-12-16T12:45:49.641383118Z" level=info msg="CreateContainer within sandbox \"3d319a3561ee8dc6676d20087c9e3817fd9d01bd406fecdf482e8cdef01e5acc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4e3101c7aaad1ee21b887396d68574328bee639827ca34dd78f5086bfaa85db0\"" Dec 16 12:45:49.642597 containerd[2141]: time="2025-12-16T12:45:49.642547407Z" level=info msg="StartContainer for \"4e3101c7aaad1ee21b887396d68574328bee639827ca34dd78f5086bfaa85db0\"" Dec 16 12:45:49.645774 containerd[2141]: time="2025-12-16T12:45:49.645737948Z" level=info msg="connecting to shim 4e3101c7aaad1ee21b887396d68574328bee639827ca34dd78f5086bfaa85db0" address="unix:///run/containerd/s/5cc54e16a4ceab3aa3e2a4cc2f8d08bdf1245cb2b0c99b35b98e1ac1fe322881" protocol=ttrpc version=3 Dec 16 12:45:49.679619 systemd[1]: Started cri-containerd-4e3101c7aaad1ee21b887396d68574328bee639827ca34dd78f5086bfaa85db0.scope - libcontainer container 4e3101c7aaad1ee21b887396d68574328bee639827ca34dd78f5086bfaa85db0. Dec 16 12:45:49.680483 systemd[1]: Started cri-containerd-dc9ad67384544165a194f670ae75134405f391ee1ec6306862606e2bd59cf7ab.scope - libcontainer container dc9ad67384544165a194f670ae75134405f391ee1ec6306862606e2bd59cf7ab. Dec 16 12:45:49.699000 audit: BPF prog-id=127 op=LOAD Dec 16 12:45:49.700000 audit: BPF prog-id=128 op=LOAD Dec 16 12:45:49.700000 audit[3410]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3288 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463396164363733383435343431363561313934663637306165373531 Dec 16 12:45:49.700000 audit: BPF prog-id=128 op=UNLOAD Dec 16 12:45:49.700000 audit[3410]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.700000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463396164363733383435343431363561313934663637306165373531 Dec 16 12:45:49.701000 audit: BPF prog-id=129 op=LOAD Dec 16 12:45:49.701000 audit[3410]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3288 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.701000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463396164363733383435343431363561313934663637306165373531 Dec 16 12:45:49.702000 audit: BPF prog-id=130 op=LOAD Dec 16 12:45:49.702000 audit[3410]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3288 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463396164363733383435343431363561313934663637306165373531 Dec 16 12:45:49.702000 audit: BPF prog-id=130 op=UNLOAD Dec 16 12:45:49.702000 audit[3410]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463396164363733383435343431363561313934663637306165373531 Dec 16 12:45:49.702000 audit: BPF prog-id=129 op=UNLOAD Dec 16 12:45:49.702000 audit[3410]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3288 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463396164363733383435343431363561313934663637306165373531 Dec 16 12:45:49.702000 audit: BPF prog-id=131 op=LOAD Dec 16 12:45:49.702000 audit[3410]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3288 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463396164363733383435343431363561313934663637306165373531 Dec 16 12:45:49.703000 audit: BPF prog-id=132 op=LOAD Dec 16 12:45:49.704000 audit: BPF prog-id=133 op=LOAD Dec 16 12:45:49.704000 audit[3413]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3337 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.704000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465333130316337616161643165653231623838373339366436383537 Dec 16 12:45:49.704000 audit: BPF prog-id=133 op=UNLOAD Dec 16 12:45:49.704000 audit[3413]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465333130316337616161643165653231623838373339366436383537 Dec 16 12:45:49.704000 audit: BPF prog-id=134 op=LOAD Dec 16 12:45:49.704000 audit[3413]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3337 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465333130316337616161643165653231623838373339366436383537 Dec 16 12:45:49.704000 audit: BPF prog-id=135 op=LOAD Dec 16 12:45:49.704000 audit[3413]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3337 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465333130316337616161643165653231623838373339366436383537 Dec 16 12:45:49.704000 audit: BPF prog-id=135 op=UNLOAD Dec 16 12:45:49.704000 audit[3413]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465333130316337616161643165653231623838373339366436383537 Dec 16 12:45:49.704000 audit: BPF prog-id=134 op=UNLOAD Dec 16 12:45:49.704000 audit[3413]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3337 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.704000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465333130316337616161643165653231623838373339366436383537 Dec 16 12:45:49.704000 audit: BPF prog-id=136 op=LOAD Dec 16 12:45:49.704000 audit[3413]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3337 pid=3413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:49.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465333130316337616161643165653231623838373339366436383537 Dec 16 12:45:49.960846 kubelet[3203]: I1216 12:45:49.960737 3203 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:50.588047 containerd[2141]: time="2025-12-16T12:45:50.587980827Z" level=info msg="StartContainer for \"dc9ad67384544165a194f670ae75134405f391ee1ec6306862606e2bd59cf7ab\" returns successfully" Dec 16 12:45:50.590603 containerd[2141]: time="2025-12-16T12:45:50.590321903Z" level=info msg="StartContainer for \"4e3101c7aaad1ee21b887396d68574328bee639827ca34dd78f5086bfaa85db0\" returns successfully" Dec 16 12:45:50.598965 kubelet[3203]: E1216 12:45:50.598574 3203 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-a4975b77c5\" not found" node="ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:50.600456 kubelet[3203]: E1216 12:45:50.600432 3203 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-a-a4975b77c5\" not found" node="ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:50.819094 kubelet[3203]: E1216 12:45:50.819036 3203 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4515.1.0-a-a4975b77c5\" not found" node="ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:50.915205 kubelet[3203]: I1216 12:45:50.914769 3203 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:50.915205 kubelet[3203]: E1216 12:45:50.915031 3203 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515.1.0-a-a4975b77c5\": node \"ci-4515.1.0-a-a4975b77c5\" not found" Dec 16 12:45:50.948190 kubelet[3203]: E1216 12:45:50.948074 3203 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-a4975b77c5\" not found" Dec 16 12:45:51.040149 update_engine[2110]: I20251216 12:45:51.039800 2110 update_attempter.cc:509] Updating boot flags... 
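The long proctitle= fields in the audit records above are hex-encoded command lines in which the individual arguments are separated by NUL bytes. A minimal Python sketch for decoding them (standard library only; the helper name is ours):

    def decode_proctitle(hex_value: str) -> str:
        # Audit PROCTITLE values are hex-encoded argv with NUL separators;
        # replacing the NULs with spaces yields a readable command line.
        return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode("utf-8", errors="replace")

    # The entries above decode to runc invocations of the form
    # "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container-id>",
    # with the container ID truncated by the audit record itself.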
Dec 16 12:45:51.048428 kubelet[3203]: E1216 12:45:51.048385 3203 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-a4975b77c5\" not found" Dec 16 12:45:51.149378 kubelet[3203]: E1216 12:45:51.149332 3203 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-a4975b77c5\" not found" Dec 16 12:45:51.250456 kubelet[3203]: E1216 12:45:51.250320 3203 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-a4975b77c5\" not found" Dec 16 12:45:51.351030 kubelet[3203]: E1216 12:45:51.350984 3203 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-a4975b77c5\" not found" Dec 16 12:45:51.452046 kubelet[3203]: E1216 12:45:51.451999 3203 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-a4975b77c5\" not found" Dec 16 12:45:51.552723 kubelet[3203]: E1216 12:45:51.552677 3203 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-a-a4975b77c5\" not found" Dec 16 12:45:51.574324 kubelet[3203]: I1216 12:45:51.574276 3203 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:51.580186 kubelet[3203]: E1216 12:45:51.580149 3203 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-a-a4975b77c5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:51.580186 kubelet[3203]: I1216 12:45:51.580179 3203 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:51.581580 kubelet[3203]: E1216 12:45:51.581545 3203 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-a-a4975b77c5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:51.581580 kubelet[3203]: I1216 12:45:51.581568 3203 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:51.583127 kubelet[3203]: E1216 12:45:51.583097 3203 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-a-a4975b77c5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:51.599629 kubelet[3203]: I1216 12:45:51.599571 3203 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:51.600592 kubelet[3203]: I1216 12:45:51.600202 3203 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:51.600592 kubelet[3203]: I1216 12:45:51.600348 3203 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:51.602583 kubelet[3203]: E1216 12:45:51.602558 3203 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-a-a4975b77c5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:51.602829 kubelet[3203]: E1216 12:45:51.602621 3203 kubelet.go:3196] "Failed creating a 
mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-a-a4975b77c5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:51.603138 kubelet[3203]: E1216 12:45:51.603113 3203 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-a-a4975b77c5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:52.364568 kubelet[3203]: I1216 12:45:52.364525 3203 apiserver.go:52] "Watching apiserver" Dec 16 12:45:52.376799 kubelet[3203]: I1216 12:45:52.376747 3203 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:45:52.602453 kubelet[3203]: I1216 12:45:52.602349 3203 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:52.602826 kubelet[3203]: I1216 12:45:52.602544 3203 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:52.602826 kubelet[3203]: I1216 12:45:52.602782 3203 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:52.618039 kubelet[3203]: W1216 12:45:52.617646 3203 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 12:45:52.620787 kubelet[3203]: W1216 12:45:52.620742 3203 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 12:45:52.621568 kubelet[3203]: W1216 12:45:52.621548 3203 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 12:45:52.986216 systemd[1]: Reload requested from client PID 3542 ('systemctl') (unit session-9.scope)... Dec 16 12:45:52.986674 systemd[1]: Reloading... Dec 16 12:45:53.086213 zram_generator::config[3592]: No configuration found. Dec 16 12:45:53.261429 systemd[1]: Reloading finished in 274 ms. Dec 16 12:45:53.300627 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:45:53.312564 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:45:53.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:53.312968 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:45:53.313037 systemd[1]: kubelet.service: Consumed 592ms CPU time, 127.1M memory peak. Dec 16 12:45:53.316406 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 16 12:45:53.316479 kernel: audit: type=1131 audit(1765889153.311:423): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:53.318200 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 12:45:53.329000 audit: BPF prog-id=137 op=LOAD Dec 16 12:45:53.336122 kernel: audit: type=1334 audit(1765889153.329:424): prog-id=137 op=LOAD Dec 16 12:45:53.337000 audit: BPF prog-id=104 op=UNLOAD Dec 16 12:45:53.337000 audit: BPF prog-id=138 op=LOAD Dec 16 12:45:53.348413 kernel: audit: type=1334 audit(1765889153.337:425): prog-id=104 op=UNLOAD Dec 16 12:45:53.348489 kernel: audit: type=1334 audit(1765889153.337:426): prog-id=138 op=LOAD Dec 16 12:45:53.337000 audit: BPF prog-id=139 op=LOAD Dec 16 12:45:53.352764 kernel: audit: type=1334 audit(1765889153.337:427): prog-id=139 op=LOAD Dec 16 12:45:53.337000 audit: BPF prog-id=105 op=UNLOAD Dec 16 12:45:53.357131 kernel: audit: type=1334 audit(1765889153.337:428): prog-id=105 op=UNLOAD Dec 16 12:45:53.337000 audit: BPF prog-id=106 op=UNLOAD Dec 16 12:45:53.361316 kernel: audit: type=1334 audit(1765889153.337:429): prog-id=106 op=UNLOAD Dec 16 12:45:53.338000 audit: BPF prog-id=140 op=LOAD Dec 16 12:45:53.365605 kernel: audit: type=1334 audit(1765889153.338:430): prog-id=140 op=LOAD Dec 16 12:45:53.338000 audit: BPF prog-id=141 op=LOAD Dec 16 12:45:53.369791 kernel: audit: type=1334 audit(1765889153.338:431): prog-id=141 op=LOAD Dec 16 12:45:53.338000 audit: BPF prog-id=91 op=UNLOAD Dec 16 12:45:53.374039 kernel: audit: type=1334 audit(1765889153.338:432): prog-id=91 op=UNLOAD Dec 16 12:45:53.338000 audit: BPF prog-id=92 op=UNLOAD Dec 16 12:45:53.342000 audit: BPF prog-id=142 op=LOAD Dec 16 12:45:53.342000 audit: BPF prog-id=90 op=UNLOAD Dec 16 12:45:53.346000 audit: BPF prog-id=143 op=LOAD Dec 16 12:45:53.346000 audit: BPF prog-id=96 op=UNLOAD Dec 16 12:45:53.351000 audit: BPF prog-id=144 op=LOAD Dec 16 12:45:53.351000 audit: BPF prog-id=87 op=UNLOAD Dec 16 12:45:53.351000 audit: BPF prog-id=145 op=LOAD Dec 16 12:45:53.351000 audit: BPF prog-id=146 op=LOAD Dec 16 12:45:53.351000 audit: BPF prog-id=88 op=UNLOAD Dec 16 12:45:53.351000 audit: BPF prog-id=89 op=UNLOAD Dec 16 12:45:53.355000 audit: BPF prog-id=147 op=LOAD Dec 16 12:45:53.355000 audit: BPF prog-id=93 op=UNLOAD Dec 16 12:45:53.356000 audit: BPF prog-id=148 op=LOAD Dec 16 12:45:53.356000 audit: BPF prog-id=149 op=LOAD Dec 16 12:45:53.356000 audit: BPF prog-id=94 op=UNLOAD Dec 16 12:45:53.356000 audit: BPF prog-id=95 op=UNLOAD Dec 16 12:45:53.360000 audit: BPF prog-id=150 op=LOAD Dec 16 12:45:53.360000 audit: BPF prog-id=97 op=UNLOAD Dec 16 12:45:53.364000 audit: BPF prog-id=151 op=LOAD Dec 16 12:45:53.364000 audit: BPF prog-id=152 op=LOAD Dec 16 12:45:53.364000 audit: BPF prog-id=98 op=UNLOAD Dec 16 12:45:53.364000 audit: BPF prog-id=99 op=UNLOAD Dec 16 12:45:53.368000 audit: BPF prog-id=153 op=LOAD Dec 16 12:45:53.368000 audit: BPF prog-id=100 op=UNLOAD Dec 16 12:45:53.368000 audit: BPF prog-id=154 op=LOAD Dec 16 12:45:53.368000 audit: BPF prog-id=155 op=LOAD Dec 16 12:45:53.368000 audit: BPF prog-id=101 op=UNLOAD Dec 16 12:45:53.368000 audit: BPF prog-id=102 op=UNLOAD Dec 16 12:45:53.372000 audit: BPF prog-id=156 op=LOAD Dec 16 12:45:53.372000 audit: BPF prog-id=103 op=UNLOAD Dec 16 12:45:53.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:45:53.478504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
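The burst of "audit: BPF prog-id=N op=LOAD/UNLOAD" records above appears to come from systemd detaching and re-attaching per-unit BPF programs while it reloads and restarts kubelet.service. A small sketch for checking that the LOADs and UNLOADs in a captured log balance out (the regex matches the record format shown above; the function name is ours):

    import re
    from collections import Counter

    BPF_EVENT = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

    def unbalanced_bpf_progs(log_lines):
        # Count LOAD as +1 and UNLOAD as -1 per prog-id; anything left
        # positive was loaded in this capture and not (yet) unloaded.
        balance = Counter()
        for line in log_lines:
            for prog_id, op in BPF_EVENT.findall(line):
                balance[prog_id] += 1 if op == "LOAD" else -1
        return sorted(pid for pid, n in balance.items() if n > 0)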
Dec 16 12:45:53.487905 (kubelet)[3656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:45:53.519311 kubelet[3656]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:45:53.519788 kubelet[3656]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:45:53.519788 kubelet[3656]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:45:53.519883 kubelet[3656]: I1216 12:45:53.519843 3656 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:45:53.525120 kubelet[3656]: I1216 12:45:53.524476 3656 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 12:45:53.525120 kubelet[3656]: I1216 12:45:53.524508 3656 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:45:53.525120 kubelet[3656]: I1216 12:45:53.524725 3656 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 12:45:53.525995 kubelet[3656]: I1216 12:45:53.525965 3656 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 16 12:45:53.527794 kubelet[3656]: I1216 12:45:53.527768 3656 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:45:53.531960 kubelet[3656]: I1216 12:45:53.531941 3656 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:45:53.535759 kubelet[3656]: I1216 12:45:53.535729 3656 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:45:53.535950 kubelet[3656]: I1216 12:45:53.535917 3656 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:45:53.536089 kubelet[3656]: I1216 12:45:53.535947 3656 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-a-a4975b77c5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:45:53.536189 kubelet[3656]: I1216 12:45:53.536111 3656 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:45:53.536189 kubelet[3656]: I1216 12:45:53.536118 3656 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 12:45:53.536189 kubelet[3656]: I1216 12:45:53.536157 3656 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:45:53.536291 kubelet[3656]: I1216 12:45:53.536273 3656 kubelet.go:446] "Attempting to sync node with API server" Dec 16 12:45:53.536291 kubelet[3656]: I1216 12:45:53.536289 3656 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:45:53.536994 kubelet[3656]: I1216 12:45:53.536975 3656 kubelet.go:352] "Adding apiserver pod source" Dec 16 12:45:53.537024 kubelet[3656]: I1216 12:45:53.537003 3656 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:45:53.544121 kubelet[3656]: I1216 12:45:53.543151 3656 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:45:53.544121 kubelet[3656]: I1216 12:45:53.543464 3656 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 12:45:53.544121 kubelet[3656]: I1216 12:45:53.543841 3656 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:45:53.544121 kubelet[3656]: I1216 12:45:53.543869 3656 server.go:1287] "Started kubelet" Dec 16 12:45:53.546152 kubelet[3656]: I1216 12:45:53.546136 3656 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:45:53.551372 kubelet[3656]: I1216 12:45:53.551335 3656 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:45:53.551980 kubelet[3656]: I1216 12:45:53.551936 3656 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:45:53.552400 kubelet[3656]: I1216 12:45:53.552375 3656 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:45:53.552993 kubelet[3656]: I1216 12:45:53.552977 3656 server.go:479] "Adding debug handlers to kubelet server" Dec 16 12:45:53.553816 kubelet[3656]: I1216 12:45:53.553046 3656 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:45:53.554019 kubelet[3656]: I1216 12:45:53.554008 3656 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:45:53.555309 kubelet[3656]: I1216 12:45:53.555292 3656 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:45:53.556048 kubelet[3656]: I1216 12:45:53.556034 3656 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:45:53.557394 kubelet[3656]: I1216 12:45:53.557378 3656 factory.go:221] Registration of the systemd container factory successfully Dec 16 12:45:53.557600 kubelet[3656]: I1216 12:45:53.557584 3656 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:45:53.559228 kubelet[3656]: E1216 12:45:53.559205 3656 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:45:53.560661 kubelet[3656]: I1216 12:45:53.560644 3656 factory.go:221] Registration of the containerd container factory successfully Dec 16 12:45:53.561162 kubelet[3656]: I1216 12:45:53.561099 3656 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 12:45:53.563218 kubelet[3656]: I1216 12:45:53.563191 3656 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 12:45:53.563218 kubelet[3656]: I1216 12:45:53.563219 3656 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 12:45:53.563315 kubelet[3656]: I1216 12:45:53.563237 3656 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 12:45:53.563315 kubelet[3656]: I1216 12:45:53.563243 3656 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 12:45:53.563315 kubelet[3656]: E1216 12:45:53.563280 3656 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:45:53.612210 kubelet[3656]: I1216 12:45:53.612180 3656 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:45:53.612210 kubelet[3656]: I1216 12:45:53.612197 3656 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:45:53.612210 kubelet[3656]: I1216 12:45:53.612215 3656 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:45:53.612388 kubelet[3656]: I1216 12:45:53.612361 3656 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:45:53.612388 kubelet[3656]: I1216 12:45:53.612368 3656 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:45:53.612388 kubelet[3656]: I1216 12:45:53.612384 3656 policy_none.go:49] "None policy: Start" Dec 16 12:45:53.612429 kubelet[3656]: I1216 12:45:53.612391 3656 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:45:53.612429 kubelet[3656]: I1216 12:45:53.612398 3656 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:45:53.612483 kubelet[3656]: I1216 12:45:53.612468 3656 state_mem.go:75] "Updated machine memory state" Dec 16 12:45:53.618584 kubelet[3656]: I1216 12:45:53.618053 3656 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 12:45:53.618584 kubelet[3656]: I1216 12:45:53.618252 3656 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:45:53.618584 kubelet[3656]: I1216 12:45:53.618263 3656 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:45:53.618584 kubelet[3656]: I1216 12:45:53.618480 3656 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:45:53.622151 kubelet[3656]: E1216 12:45:53.622059 3656 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:45:53.664803 kubelet[3656]: I1216 12:45:53.664766 3656 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:53.665487 kubelet[3656]: I1216 12:45:53.665149 3656 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:53.665487 kubelet[3656]: I1216 12:45:53.665366 3656 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:53.677097 kubelet[3656]: W1216 12:45:53.677045 3656 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 12:45:53.677264 kubelet[3656]: E1216 12:45:53.677116 3656 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-a-a4975b77c5\" already exists" pod="kube-system/kube-scheduler-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:53.678224 kubelet[3656]: W1216 12:45:53.678198 3656 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 12:45:53.678303 kubelet[3656]: E1216 12:45:53.678247 3656 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-a-a4975b77c5\" already exists" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:53.678439 kubelet[3656]: W1216 12:45:53.678424 3656 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 12:45:53.678473 kubelet[3656]: E1216 12:45:53.678455 3656 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-a-a4975b77c5\" already exists" pod="kube-system/kube-apiserver-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:53.729311 kubelet[3656]: I1216 12:45:53.729222 3656 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:53.741707 kubelet[3656]: I1216 12:45:53.741675 3656 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:53.741916 kubelet[3656]: I1216 12:45:53.741764 3656 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:53.756386 kubelet[3656]: I1216 12:45:53.756349 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c66cf11ac45fcd578d73298eeaa3234-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-a4975b77c5\" (UID: \"5c66cf11ac45fcd578d73298eeaa3234\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:53.756386 kubelet[3656]: I1216 12:45:53.756380 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c66cf11ac45fcd578d73298eeaa3234-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-a-a4975b77c5\" (UID: \"5c66cf11ac45fcd578d73298eeaa3234\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:53.756386 kubelet[3656]: I1216 12:45:53.756394 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/5c66cf11ac45fcd578d73298eeaa3234-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-a-a4975b77c5\" (UID: \"5c66cf11ac45fcd578d73298eeaa3234\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:53.756582 kubelet[3656]: I1216 12:45:53.756406 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c66cf11ac45fcd578d73298eeaa3234-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-a-a4975b77c5\" (UID: \"5c66cf11ac45fcd578d73298eeaa3234\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:53.756582 kubelet[3656]: I1216 12:45:53.756420 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bf9761a59b5892088a6bdc70eef89598-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-a-a4975b77c5\" (UID: \"bf9761a59b5892088a6bdc70eef89598\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:53.756582 kubelet[3656]: I1216 12:45:53.756430 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bf9761a59b5892088a6bdc70eef89598-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-a-a4975b77c5\" (UID: \"bf9761a59b5892088a6bdc70eef89598\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:53.756582 kubelet[3656]: I1216 12:45:53.756439 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bf9761a59b5892088a6bdc70eef89598-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-a-a4975b77c5\" (UID: \"bf9761a59b5892088a6bdc70eef89598\") " pod="kube-system/kube-apiserver-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:53.756582 kubelet[3656]: I1216 12:45:53.756449 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5c66cf11ac45fcd578d73298eeaa3234-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-a-a4975b77c5\" (UID: \"5c66cf11ac45fcd578d73298eeaa3234\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:53.756664 kubelet[3656]: I1216 12:45:53.756458 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2ce8bfb89f7ef284d742fe39c81e4b7b-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-a-a4975b77c5\" (UID: \"2ce8bfb89f7ef284d742fe39c81e4b7b\") " pod="kube-system/kube-scheduler-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:54.540297 kubelet[3656]: I1216 12:45:54.540248 3656 apiserver.go:52] "Watching apiserver" Dec 16 12:45:54.556150 kubelet[3656]: I1216 12:45:54.556070 3656 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:45:54.596202 kubelet[3656]: I1216 12:45:54.596009 3656 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:54.616909 kubelet[3656]: W1216 12:45:54.616866 3656 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 12:45:54.617067 kubelet[3656]: E1216 12:45:54.616930 3656 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-a-a4975b77c5\" already exists" pod="kube-system/kube-apiserver-ci-4515.1.0-a-a4975b77c5" Dec 16 12:45:54.641991 kubelet[3656]: I1216 12:45:54.641865 3656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4515.1.0-a-a4975b77c5" podStartSLOduration=2.64184787 podStartE2EDuration="2.64184787s" podCreationTimestamp="2025-12-16 12:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:45:54.630597217 +0000 UTC m=+1.139250879" watchObservedRunningTime="2025-12-16 12:45:54.64184787 +0000 UTC m=+1.150501532" Dec 16 12:45:54.641991 kubelet[3656]: I1216 12:45:54.641991 3656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4515.1.0-a-a4975b77c5" podStartSLOduration=2.6419866340000002 podStartE2EDuration="2.641986634s" podCreationTimestamp="2025-12-16 12:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:45:54.641726106 +0000 UTC m=+1.150379768" watchObservedRunningTime="2025-12-16 12:45:54.641986634 +0000 UTC m=+1.150640296" Dec 16 12:45:54.652778 kubelet[3656]: I1216 12:45:54.652661 3656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515.1.0-a-a4975b77c5" podStartSLOduration=2.652643731 podStartE2EDuration="2.652643731s" podCreationTimestamp="2025-12-16 12:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:45:54.652589906 +0000 UTC m=+1.161243576" watchObservedRunningTime="2025-12-16 12:45:54.652643731 +0000 UTC m=+1.161297393" Dec 16 12:45:58.592978 kubelet[3656]: I1216 12:45:58.592934 3656 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:45:58.593668 containerd[2141]: time="2025-12-16T12:45:58.593575372Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
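At this point the kubelet pushes the node's pod CIDR (192.168.0.0/24) to the container runtime, and containerd waits for a CNI config to be dropped in. A quick standard-library check for whether a given pod IP belongs to that CIDR:

    import ipaddress

    POD_CIDR = ipaddress.ip_network("192.168.0.0/24")  # value reported in the log

    def in_pod_cidr(ip: str) -> bool:
        return ipaddress.ip_address(ip) in POD_CIDR

    # e.g. in_pod_cidr("192.168.0.17") -> True, in_pod_cidr("10.0.0.5") -> False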
Dec 16 12:45:58.593915 kubelet[3656]: I1216 12:45:58.593757 3656 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:45:59.491467 kubelet[3656]: I1216 12:45:59.491420 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd55fcff-77a3-48a1-a1df-82c14cfdb268-xtables-lock\") pod \"kube-proxy-lfq99\" (UID: \"fd55fcff-77a3-48a1-a1df-82c14cfdb268\") " pod="kube-system/kube-proxy-lfq99" Dec 16 12:45:59.491721 kubelet[3656]: I1216 12:45:59.491624 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fd55fcff-77a3-48a1-a1df-82c14cfdb268-kube-proxy\") pod \"kube-proxy-lfq99\" (UID: \"fd55fcff-77a3-48a1-a1df-82c14cfdb268\") " pod="kube-system/kube-proxy-lfq99" Dec 16 12:45:59.491721 kubelet[3656]: I1216 12:45:59.491654 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh2nq\" (UniqueName: \"kubernetes.io/projected/fd55fcff-77a3-48a1-a1df-82c14cfdb268-kube-api-access-bh2nq\") pod \"kube-proxy-lfq99\" (UID: \"fd55fcff-77a3-48a1-a1df-82c14cfdb268\") " pod="kube-system/kube-proxy-lfq99" Dec 16 12:45:59.491721 kubelet[3656]: I1216 12:45:59.491672 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd55fcff-77a3-48a1-a1df-82c14cfdb268-lib-modules\") pod \"kube-proxy-lfq99\" (UID: \"fd55fcff-77a3-48a1-a1df-82c14cfdb268\") " pod="kube-system/kube-proxy-lfq99" Dec 16 12:45:59.493755 systemd[1]: Created slice kubepods-besteffort-podfd55fcff_77a3_48a1_a1df_82c14cfdb268.slice - libcontainer container kubepods-besteffort-podfd55fcff_77a3_48a1_a1df_82c14cfdb268.slice. Dec 16 12:45:59.668094 systemd[1]: Created slice kubepods-besteffort-pod6f379358_b66f_4540_8378_698ba8e98017.slice - libcontainer container kubepods-besteffort-pod6f379358_b66f_4540_8378_698ba8e98017.slice. 
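The "Created slice" lines above show how pod UIDs map onto systemd cgroup slice names under the kubelet's systemd cgroup driver: dashes in the UID are replaced with underscores, since "-" expresses slice hierarchy in systemd unit names. A sketch reproducing the BestEffort naming seen here (helper name is ours):

    def besteffort_pod_slice(pod_uid: str) -> str:
        # systemd uses '-' to express slice hierarchy, so the kubelet escapes
        # the dashes in the pod UID with underscores.
        return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

    # besteffort_pod_slice("fd55fcff-77a3-48a1-a1df-82c14cfdb268")
    #   -> "kubepods-besteffort-podfd55fcff_77a3_48a1_a1df_82c14cfdb268.slice"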
Dec 16 12:45:59.693729 kubelet[3656]: I1216 12:45:59.693682 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6f379358-b66f-4540-8378-698ba8e98017-var-lib-calico\") pod \"tigera-operator-7dcd859c48-dq4bn\" (UID: \"6f379358-b66f-4540-8378-698ba8e98017\") " pod="tigera-operator/tigera-operator-7dcd859c48-dq4bn" Dec 16 12:45:59.693729 kubelet[3656]: I1216 12:45:59.693734 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4j68\" (UniqueName: \"kubernetes.io/projected/6f379358-b66f-4540-8378-698ba8e98017-kube-api-access-n4j68\") pod \"tigera-operator-7dcd859c48-dq4bn\" (UID: \"6f379358-b66f-4540-8378-698ba8e98017\") " pod="tigera-operator/tigera-operator-7dcd859c48-dq4bn" Dec 16 12:45:59.804179 containerd[2141]: time="2025-12-16T12:45:59.804140494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lfq99,Uid:fd55fcff-77a3-48a1-a1df-82c14cfdb268,Namespace:kube-system,Attempt:0,}" Dec 16 12:45:59.840873 containerd[2141]: time="2025-12-16T12:45:59.840823502Z" level=info msg="connecting to shim d3901fd54f7f75f76cb381b117a78022f0bb4344300c080cdaac2d13871ddf47" address="unix:///run/containerd/s/bb89f975958a1dd41365d86023504610c0bd12827b82e6d15ff32ef250682b00" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:45:59.867351 systemd[1]: Started cri-containerd-d3901fd54f7f75f76cb381b117a78022f0bb4344300c080cdaac2d13871ddf47.scope - libcontainer container d3901fd54f7f75f76cb381b117a78022f0bb4344300c080cdaac2d13871ddf47. Dec 16 12:45:59.874000 audit: BPF prog-id=157 op=LOAD Dec 16 12:45:59.879311 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 12:45:59.879371 kernel: audit: type=1334 audit(1765889159.874:465): prog-id=157 op=LOAD Dec 16 12:45:59.887011 kernel: audit: type=1334 audit(1765889159.882:466): prog-id=158 op=LOAD Dec 16 12:45:59.882000 audit: BPF prog-id=158 op=LOAD Dec 16 12:45:59.882000 audit[3719]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3708 pid=3719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:59.902128 kernel: audit: type=1300 audit(1765889159.882:466): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3708 pid=3719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:59.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433393031666435346637663735663736636233383162313137613738 Dec 16 12:45:59.918263 kernel: audit: type=1327 audit(1765889159.882:466): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433393031666435346637663735663736636233383162313137613738 Dec 16 12:45:59.922977 kernel: audit: type=1334 audit(1765889159.882:467): prog-id=158 op=UNLOAD Dec 16 12:45:59.882000 audit: BPF prog-id=158 op=UNLOAD Dec 16 12:45:59.882000 audit[3719]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3708 pid=3719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:59.939243 kernel: audit: type=1300 audit(1765889159.882:467): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3708 pid=3719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:59.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433393031666435346637663735663736636233383162313137613738 Dec 16 12:45:59.956224 kernel: audit: type=1327 audit(1765889159.882:467): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433393031666435346637663735663736636233383162313137613738 Dec 16 12:45:59.882000 audit: BPF prog-id=159 op=LOAD Dec 16 12:45:59.961005 kernel: audit: type=1334 audit(1765889159.882:468): prog-id=159 op=LOAD Dec 16 12:45:59.882000 audit[3719]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3708 pid=3719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:59.977445 kernel: audit: type=1300 audit(1765889159.882:468): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3708 pid=3719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:59.979282 containerd[2141]: time="2025-12-16T12:45:59.979239979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dq4bn,Uid:6f379358-b66f-4540-8378-698ba8e98017,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:45:59.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433393031666435346637663735663736636233383162313137613738 Dec 16 12:45:59.995541 kernel: audit: type=1327 audit(1765889159.882:468): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433393031666435346637663735663736636233383162313137613738 Dec 16 12:45:59.901000 audit: BPF prog-id=160 op=LOAD Dec 16 12:45:59.901000 audit[3719]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3708 pid=3719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:59.901000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433393031666435346637663735663736636233383162313137613738 Dec 16 12:45:59.901000 audit: BPF prog-id=160 op=UNLOAD Dec 16 12:45:59.901000 audit[3719]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3708 pid=3719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:59.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433393031666435346637663735663736636233383162313137613738 Dec 16 12:45:59.901000 audit: BPF prog-id=159 op=UNLOAD Dec 16 12:45:59.901000 audit[3719]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3708 pid=3719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:59.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433393031666435346637663735663736636233383162313137613738 Dec 16 12:45:59.901000 audit: BPF prog-id=161 op=LOAD Dec 16 12:45:59.901000 audit[3719]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3708 pid=3719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:45:59.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433393031666435346637663735663736636233383162313137613738 Dec 16 12:46:00.002566 containerd[2141]: time="2025-12-16T12:46:00.002514294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lfq99,Uid:fd55fcff-77a3-48a1-a1df-82c14cfdb268,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3901fd54f7f75f76cb381b117a78022f0bb4344300c080cdaac2d13871ddf47\"" Dec 16 12:46:00.007110 containerd[2141]: time="2025-12-16T12:46:00.006979394Z" level=info msg="CreateContainer within sandbox \"d3901fd54f7f75f76cb381b117a78022f0bb4344300c080cdaac2d13871ddf47\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:46:00.043004 containerd[2141]: time="2025-12-16T12:46:00.042954523Z" level=info msg="connecting to shim 76342e18df160c5f25d3c3303d404c5a88b86be46e150a508917b982dce0a567" address="unix:///run/containerd/s/fc60135b876cfcdfcaf17b26895f58c5e8397e6a58f6676b53554d606011105b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:00.046602 containerd[2141]: time="2025-12-16T12:46:00.046561195Z" level=info msg="Container 8b82046300852d0ced5454944d244de219f1559cf9c877830dc53f8050534253: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:00.065242 containerd[2141]: time="2025-12-16T12:46:00.064670923Z" level=info msg="CreateContainer within sandbox 
\"d3901fd54f7f75f76cb381b117a78022f0bb4344300c080cdaac2d13871ddf47\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8b82046300852d0ced5454944d244de219f1559cf9c877830dc53f8050534253\"" Dec 16 12:46:00.065664 containerd[2141]: time="2025-12-16T12:46:00.065483430Z" level=info msg="StartContainer for \"8b82046300852d0ced5454944d244de219f1559cf9c877830dc53f8050534253\"" Dec 16 12:46:00.067828 containerd[2141]: time="2025-12-16T12:46:00.067306098Z" level=info msg="connecting to shim 8b82046300852d0ced5454944d244de219f1559cf9c877830dc53f8050534253" address="unix:///run/containerd/s/bb89f975958a1dd41365d86023504610c0bd12827b82e6d15ff32ef250682b00" protocol=ttrpc version=3 Dec 16 12:46:00.067531 systemd[1]: Started cri-containerd-76342e18df160c5f25d3c3303d404c5a88b86be46e150a508917b982dce0a567.scope - libcontainer container 76342e18df160c5f25d3c3303d404c5a88b86be46e150a508917b982dce0a567. Dec 16 12:46:00.099320 systemd[1]: Started cri-containerd-8b82046300852d0ced5454944d244de219f1559cf9c877830dc53f8050534253.scope - libcontainer container 8b82046300852d0ced5454944d244de219f1559cf9c877830dc53f8050534253. Dec 16 12:46:00.100000 audit: BPF prog-id=162 op=LOAD Dec 16 12:46:00.101000 audit: BPF prog-id=163 op=LOAD Dec 16 12:46:00.101000 audit[3763]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3751 pid=3763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736333432653138646631363063356632356433633333303364343034 Dec 16 12:46:00.101000 audit: BPF prog-id=163 op=UNLOAD Dec 16 12:46:00.101000 audit[3763]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3751 pid=3763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736333432653138646631363063356632356433633333303364343034 Dec 16 12:46:00.101000 audit: BPF prog-id=164 op=LOAD Dec 16 12:46:00.101000 audit[3763]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3751 pid=3763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736333432653138646631363063356632356433633333303364343034 Dec 16 12:46:00.102000 audit: BPF prog-id=165 op=LOAD Dec 16 12:46:00.102000 audit[3763]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3751 pid=3763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736333432653138646631363063356632356433633333303364343034 Dec 16 12:46:00.102000 audit: BPF prog-id=165 op=UNLOAD Dec 16 12:46:00.102000 audit[3763]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3751 pid=3763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736333432653138646631363063356632356433633333303364343034 Dec 16 12:46:00.102000 audit: BPF prog-id=164 op=UNLOAD Dec 16 12:46:00.102000 audit[3763]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3751 pid=3763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736333432653138646631363063356632356433633333303364343034 Dec 16 12:46:00.102000 audit: BPF prog-id=166 op=LOAD Dec 16 12:46:00.102000 audit[3763]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3751 pid=3763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736333432653138646631363063356632356433633333303364343034 Dec 16 12:46:00.128706 containerd[2141]: time="2025-12-16T12:46:00.128657892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dq4bn,Uid:6f379358-b66f-4540-8378-698ba8e98017,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"76342e18df160c5f25d3c3303d404c5a88b86be46e150a508917b982dce0a567\"" Dec 16 12:46:00.131671 containerd[2141]: time="2025-12-16T12:46:00.131638871Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:46:00.136000 audit: BPF prog-id=167 op=LOAD Dec 16 12:46:00.136000 audit[3775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=3708 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.136000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862383230343633303038353264306365643534353439343464323434 Dec 16 12:46:00.136000 audit: BPF prog-id=168 op=LOAD Dec 16 12:46:00.136000 audit[3775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=3708 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862383230343633303038353264306365643534353439343464323434 Dec 16 12:46:00.136000 audit: BPF prog-id=168 op=UNLOAD Dec 16 12:46:00.136000 audit[3775]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3708 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862383230343633303038353264306365643534353439343464323434 Dec 16 12:46:00.136000 audit: BPF prog-id=167 op=UNLOAD Dec 16 12:46:00.136000 audit[3775]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3708 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862383230343633303038353264306365643534353439343464323434 Dec 16 12:46:00.136000 audit: BPF prog-id=169 op=LOAD Dec 16 12:46:00.136000 audit[3775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=3708 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.136000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862383230343633303038353264306365643534353439343464323434 Dec 16 12:46:00.155984 containerd[2141]: time="2025-12-16T12:46:00.155927292Z" level=info msg="StartContainer for \"8b82046300852d0ced5454944d244de219f1559cf9c877830dc53f8050534253\" returns successfully" Dec 16 12:46:00.242000 audit[3851]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3851 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.242000 audit[3851]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd76e3c10 a2=0 a3=1 items=0 ppid=3794 pid=3851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.242000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:46:00.243000 audit[3850]: NETFILTER_CFG table=mangle:58 family=2 entries=1 op=nft_register_chain pid=3850 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.243000 audit[3850]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe899f030 a2=0 a3=1 items=0 ppid=3794 pid=3850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.243000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:46:00.248000 audit[3855]: NETFILTER_CFG table=nat:59 family=10 entries=1 op=nft_register_chain pid=3855 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.248000 audit[3855]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffda0fc20 a2=0 a3=1 items=0 ppid=3794 pid=3855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.248000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:46:00.249000 audit[3854]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_chain pid=3854 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.249000 audit[3854]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff3e3fa60 a2=0 a3=1 items=0 ppid=3794 pid=3854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.249000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:46:00.252000 audit[3856]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_chain pid=3856 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.252000 audit[3856]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd9e4fc10 a2=0 a3=1 items=0 ppid=3794 pid=3856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.252000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:46:00.253000 audit[3857]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=3857 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.253000 audit[3857]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff0f71db0 a2=0 a3=1 items=0 ppid=3794 pid=3857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.253000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:46:00.346000 audit[3858]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3858 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.346000 audit[3858]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd93a5300 a2=0 a3=1 items=0 ppid=3794 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.346000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:46:00.349000 audit[3860]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3860 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.349000 audit[3860]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcc380d90 a2=0 a3=1 items=0 ppid=3794 pid=3860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 12:46:00.353000 audit[3863]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=3863 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.353000 audit[3863]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd6f656e0 a2=0 a3=1 items=0 ppid=3794 pid=3863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.353000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 12:46:00.356000 audit[3864]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=3864 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.356000 audit[3864]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc9a7c680 a2=0 a3=1 items=0 ppid=3794 pid=3864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.356000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:46:00.358000 audit[3866]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3866 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.358000 audit[3866]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffffea9150 a2=0 a3=1 items=0 ppid=3794 pid=3866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.358000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:46:00.360000 audit[3867]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3867 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.360000 audit[3867]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd0fd6db0 a2=0 a3=1 items=0 ppid=3794 pid=3867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.360000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:46:00.362000 audit[3869]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3869 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.362000 audit[3869]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffffda6cb10 a2=0 a3=1 items=0 ppid=3794 pid=3869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.362000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 12:46:00.365000 audit[3872]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=3872 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.365000 audit[3872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffffed10050 a2=0 a3=1 items=0 ppid=3794 pid=3872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.365000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 12:46:00.366000 audit[3873]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=3873 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.366000 audit[3873]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdb9c6a60 a2=0 a3=1 items=0 ppid=3794 pid=3873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.366000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:46:00.369000 audit[3875]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3875 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.369000 audit[3875]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=528 a0=3 a1=ffffc9253d40 a2=0 a3=1 items=0 ppid=3794 pid=3875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.369000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:46:00.370000 audit[3876]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=3876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.370000 audit[3876]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff526dc60 a2=0 a3=1 items=0 ppid=3794 pid=3876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.370000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:46:00.372000 audit[3878]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=3878 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.372000 audit[3878]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff63a88b0 a2=0 a3=1 items=0 ppid=3794 pid=3878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.372000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:46:00.375000 audit[3881]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=3881 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.375000 audit[3881]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff049a9c0 a2=0 a3=1 items=0 ppid=3794 pid=3881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.375000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:46:00.378000 audit[3884]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=3884 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.378000 audit[3884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff173e490 a2=0 a3=1 items=0 ppid=3794 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.378000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 12:46:00.379000 audit[3885]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3885 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.379000 audit[3885]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffac9f910 a2=0 a3=1 items=0 ppid=3794 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.379000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:46:00.381000 audit[3887]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3887 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.381000 audit[3887]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffefd7de40 a2=0 a3=1 items=0 ppid=3794 pid=3887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.381000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:46:00.384000 audit[3890]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=3890 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.384000 audit[3890]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd03a3fe0 a2=0 a3=1 items=0 ppid=3794 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.384000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:46:00.385000 audit[3891]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=3891 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.385000 audit[3891]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc7fda00 a2=0 a3=1 items=0 ppid=3794 pid=3891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.385000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:46:00.387000 audit[3893]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=3893 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:46:00.387000 audit[3893]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffd9050220 a2=0 a3=1 items=0 ppid=3794 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.387000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:46:00.445000 audit[3899]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=3899 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:00.445000 audit[3899]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd1b93b00 a2=0 a3=1 items=0 ppid=3794 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.445000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:00.452000 audit[3899]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=3899 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:00.452000 audit[3899]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffd1b93b00 a2=0 a3=1 items=0 ppid=3794 pid=3899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.452000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:00.453000 audit[3904]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3904 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.453000 audit[3904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffefce14f0 a2=0 a3=1 items=0 ppid=3794 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.453000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:46:00.456000 audit[3906]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=3906 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.456000 audit[3906]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd9026d50 a2=0 a3=1 items=0 ppid=3794 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.456000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 12:46:00.461000 audit[3909]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3909 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.461000 audit[3909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe1222200 a2=0 a3=1 items=0 ppid=3794 pid=3909 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.461000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 12:46:00.462000 audit[3910]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3910 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.462000 audit[3910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff07d93c0 a2=0 a3=1 items=0 ppid=3794 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.462000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:46:00.465000 audit[3912]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3912 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.465000 audit[3912]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe1196f90 a2=0 a3=1 items=0 ppid=3794 pid=3912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.465000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:46:00.466000 audit[3913]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3913 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.466000 audit[3913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa7fd530 a2=0 a3=1 items=0 ppid=3794 pid=3913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.466000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:46:00.468000 audit[3915]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3915 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.468000 audit[3915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcf430920 a2=0 a3=1 items=0 ppid=3794 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.468000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 12:46:00.471000 audit[3918]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain 
pid=3918 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.471000 audit[3918]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffca519c40 a2=0 a3=1 items=0 ppid=3794 pid=3918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.471000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 12:46:00.472000 audit[3919]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=3919 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.472000 audit[3919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe1c82710 a2=0 a3=1 items=0 ppid=3794 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.472000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:46:00.474000 audit[3921]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3921 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.474000 audit[3921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd270bf70 a2=0 a3=1 items=0 ppid=3794 pid=3921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.474000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:46:00.475000 audit[3922]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=3922 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.475000 audit[3922]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc58827e0 a2=0 a3=1 items=0 ppid=3794 pid=3922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.475000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:46:00.477000 audit[3924]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=3924 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.477000 audit[3924]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe179a8f0 a2=0 a3=1 items=0 ppid=3794 pid=3924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.477000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:46:00.481000 audit[3927]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=3927 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.481000 audit[3927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff7d52110 a2=0 a3=1 items=0 ppid=3794 pid=3927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.481000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 12:46:00.484000 audit[3930]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=3930 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.484000 audit[3930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc57b8f60 a2=0 a3=1 items=0 ppid=3794 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.484000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 12:46:00.485000 audit[3931]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3931 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.485000 audit[3931]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe4724ff0 a2=0 a3=1 items=0 ppid=3794 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.485000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:46:00.487000 audit[3933]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=3933 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.487000 audit[3933]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffdd327a50 a2=0 a3=1 items=0 ppid=3794 pid=3933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.487000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:46:00.490000 audit[3936]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=3936 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.490000 audit[3936]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=528 a0=3 a1=ffffc8e61b70 a2=0 a3=1 items=0 ppid=3794 pid=3936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.490000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:46:00.491000 audit[3937]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=3937 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.491000 audit[3937]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2412e10 a2=0 a3=1 items=0 ppid=3794 pid=3937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.491000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:46:00.494000 audit[3939]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=3939 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.494000 audit[3939]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffffa90b500 a2=0 a3=1 items=0 ppid=3794 pid=3939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.494000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:46:00.495000 audit[3940]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=3940 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.495000 audit[3940]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff03cea10 a2=0 a3=1 items=0 ppid=3794 pid=3940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.495000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:46:00.497000 audit[3942]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=3942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:46:00.497000 audit[3942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffd15efe0 a2=0 a3=1 items=0 ppid=3794 pid=3942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.497000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:46:00.500000 audit[3945]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=3945 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 
12:46:00.500000 audit[3945]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff1490460 a2=0 a3=1 items=0 ppid=3794 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.500000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:46:00.503000 audit[3947]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=3947 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:46:00.503000 audit[3947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffd592fd60 a2=0 a3=1 items=0 ppid=3794 pid=3947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.503000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:00.503000 audit[3947]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=3947 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:46:00.503000 audit[3947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd592fd60 a2=0 a3=1 items=0 ppid=3794 pid=3947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:00.503000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:00.629619 kubelet[3656]: I1216 12:46:00.628897 3656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lfq99" podStartSLOduration=1.628879259 podStartE2EDuration="1.628879259s" podCreationTimestamp="2025-12-16 12:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:46:00.628815816 +0000 UTC m=+7.137469479" watchObservedRunningTime="2025-12-16 12:46:00.628879259 +0000 UTC m=+7.137532921" Dec 16 12:46:01.849286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2509852943.mount: Deactivated successfully. 
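The audit records above log each command in the PROCTITLE field as hex-encoded argv with NUL separators. A minimal Python sketch (not part of the log; the helper name decode_proctitle is illustrative) that recovers the kube-proxy iptables invocation from one of the values recorded at 12:46:00:

# Audit PROCTITLE values are the process argv, hex-encoded, with NUL bytes
# between arguments; decoding one of the values logged above shows the
# iptables command kube-proxy ran.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(a.decode("utf-8", "replace") for a in raw.split(b"\x00") if a)

sample = ("69707461626C6573002D770035002D5700313030303030"
          "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65")
print(decode_proctitle(sample))
# prints: iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle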
Dec 16 12:46:02.329699 containerd[2141]: time="2025-12-16T12:46:02.329650032Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:02.333025 containerd[2141]: time="2025-12-16T12:46:02.332970069Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 16 12:46:02.335525 containerd[2141]: time="2025-12-16T12:46:02.335477705Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:02.338724 containerd[2141]: time="2025-12-16T12:46:02.338673323Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:02.339184 containerd[2141]: time="2025-12-16T12:46:02.339158994Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.207244865s" Dec 16 12:46:02.339259 containerd[2141]: time="2025-12-16T12:46:02.339247052Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 12:46:02.341999 containerd[2141]: time="2025-12-16T12:46:02.341970991Z" level=info msg="CreateContainer within sandbox \"76342e18df160c5f25d3c3303d404c5a88b86be46e150a508917b982dce0a567\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:46:02.359540 containerd[2141]: time="2025-12-16T12:46:02.358997373Z" level=info msg="Container 9274623cc99d39fb7dc3104aded49beb000bc8900dd09f907903958a388a83b2: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:02.359884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount880375794.mount: Deactivated successfully. Dec 16 12:46:02.385241 containerd[2141]: time="2025-12-16T12:46:02.385195290Z" level=info msg="CreateContainer within sandbox \"76342e18df160c5f25d3c3303d404c5a88b86be46e150a508917b982dce0a567\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9274623cc99d39fb7dc3104aded49beb000bc8900dd09f907903958a388a83b2\"" Dec 16 12:46:02.385811 containerd[2141]: time="2025-12-16T12:46:02.385780028Z" level=info msg="StartContainer for \"9274623cc99d39fb7dc3104aded49beb000bc8900dd09f907903958a388a83b2\"" Dec 16 12:46:02.386963 containerd[2141]: time="2025-12-16T12:46:02.386923223Z" level=info msg="connecting to shim 9274623cc99d39fb7dc3104aded49beb000bc8900dd09f907903958a388a83b2" address="unix:///run/containerd/s/fc60135b876cfcdfcaf17b26895f58c5e8397e6a58f6676b53554d606011105b" protocol=ttrpc version=3 Dec 16 12:46:02.405291 systemd[1]: Started cri-containerd-9274623cc99d39fb7dc3104aded49beb000bc8900dd09f907903958a388a83b2.scope - libcontainer container 9274623cc99d39fb7dc3104aded49beb000bc8900dd09f907903958a388a83b2. 
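As a rough cross-check of the pull figures above, and assuming the bytes-read counter and the reported pull duration describe the same transfer, the tigera/operator image data came down at roughly 9.4 MB/s:

# Back-of-the-envelope throughput from the figures logged above (assumption:
# "bytes read" and the reported pull duration cover the same interval).
bytes_read = 20_773_434        # "stop pulling image ... bytes read=20773434"
pull_seconds = 2.207244865     # "Pulled image ... in 2.207244865s"
print(f"{bytes_read / pull_seconds / 1e6:.1f} MB/s")   # ~9.4 MB/s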
Dec 16 12:46:02.414000 audit: BPF prog-id=170 op=LOAD Dec 16 12:46:02.414000 audit: BPF prog-id=171 op=LOAD Dec 16 12:46:02.414000 audit[3956]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=3751 pid=3956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:02.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932373436323363633939643339666237646333313034616465643439 Dec 16 12:46:02.415000 audit: BPF prog-id=171 op=UNLOAD Dec 16 12:46:02.415000 audit[3956]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3751 pid=3956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:02.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932373436323363633939643339666237646333313034616465643439 Dec 16 12:46:02.416000 audit: BPF prog-id=172 op=LOAD Dec 16 12:46:02.416000 audit[3956]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=3751 pid=3956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:02.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932373436323363633939643339666237646333313034616465643439 Dec 16 12:46:02.417000 audit: BPF prog-id=173 op=LOAD Dec 16 12:46:02.417000 audit[3956]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=3751 pid=3956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:02.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932373436323363633939643339666237646333313034616465643439 Dec 16 12:46:02.417000 audit: BPF prog-id=173 op=UNLOAD Dec 16 12:46:02.417000 audit[3956]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3751 pid=3956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:02.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932373436323363633939643339666237646333313034616465643439 Dec 16 12:46:02.417000 audit: BPF prog-id=172 op=UNLOAD Dec 16 12:46:02.417000 audit[3956]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3751 pid=3956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:02.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932373436323363633939643339666237646333313034616465643439 Dec 16 12:46:02.418000 audit: BPF prog-id=174 op=LOAD Dec 16 12:46:02.418000 audit[3956]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=3751 pid=3956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:02.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3932373436323363633939643339666237646333313034616465643439 Dec 16 12:46:02.441248 containerd[2141]: time="2025-12-16T12:46:02.439901659Z" level=info msg="StartContainer for \"9274623cc99d39fb7dc3104aded49beb000bc8900dd09f907903958a388a83b2\" returns successfully" Dec 16 12:46:02.907026 kubelet[3656]: I1216 12:46:02.906822 3656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-dq4bn" podStartSLOduration=1.696770045 podStartE2EDuration="3.906803385s" podCreationTimestamp="2025-12-16 12:45:59 +0000 UTC" firstStartedPulling="2025-12-16 12:46:00.130013041 +0000 UTC m=+6.638666703" lastFinishedPulling="2025-12-16 12:46:02.340046373 +0000 UTC m=+8.848700043" observedRunningTime="2025-12-16 12:46:02.627812016 +0000 UTC m=+9.136465678" watchObservedRunningTime="2025-12-16 12:46:02.906803385 +0000 UTC m=+9.415457047" Dec 16 12:46:07.661253 sudo[2635]: pam_unix(sudo:session): session closed for user root Dec 16 12:46:07.660000 audit[2635]: USER_END pid=2635 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:46:07.665159 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 12:46:07.665271 kernel: audit: type=1106 audit(1765889167.660:545): pid=2635 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:46:07.660000 audit[2635]: CRED_DISP pid=2635 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:46:07.698167 kernel: audit: type=1104 audit(1765889167.660:546): pid=2635 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 12:46:07.739233 sshd[2634]: Connection closed by 10.200.16.10 port 35398 Dec 16 12:46:07.740097 sshd-session[2631]: pam_unix(sshd:session): session closed for user core Dec 16 12:46:07.742000 audit[2631]: USER_END pid=2631 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:46:07.745372 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:46:07.745676 systemd[1]: session-9.scope: Consumed 2.849s CPU time, 219.8M memory peak. Dec 16 12:46:07.762436 systemd[1]: sshd@6-10.200.20.49:22-10.200.16.10:35398.service: Deactivated successfully. Dec 16 12:46:07.742000 audit[2631]: CRED_DISP pid=2631 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:46:07.781488 kernel: audit: type=1106 audit(1765889167.742:547): pid=2631 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:46:07.781646 kernel: audit: type=1104 audit(1765889167.742:548): pid=2631 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:46:07.784834 systemd-logind[2108]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:46:07.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.49:22-10.200.16.10:35398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:46:07.786654 systemd-logind[2108]: Removed session 9. Dec 16 12:46:07.799233 kernel: audit: type=1131 audit(1765889167.762:549): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.49:22-10.200.16.10:35398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:46:09.157000 audit[4038]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4038 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:09.157000 audit[4038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcfadd710 a2=0 a3=1 items=0 ppid=3794 pid=4038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:09.192910 kernel: audit: type=1325 audit(1765889169.157:550): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4038 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:09.193056 kernel: audit: type=1300 audit(1765889169.157:550): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcfadd710 a2=0 a3=1 items=0 ppid=3794 pid=4038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:09.157000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:09.204264 kernel: audit: type=1327 audit(1765889169.157:550): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:09.172000 audit[4038]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4038 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:09.217643 kernel: audit: type=1325 audit(1765889169.172:551): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4038 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:09.172000 audit[4038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcfadd710 a2=0 a3=1 items=0 ppid=3794 pid=4038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:09.172000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:09.253284 kernel: audit: type=1300 audit(1765889169.172:551): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcfadd710 a2=0 a3=1 items=0 ppid=3794 pid=4038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:09.254000 audit[4040]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4040 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:09.254000 audit[4040]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc5a7edb0 a2=0 a3=1 items=0 ppid=3794 pid=4040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:09.254000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:09.260000 audit[4040]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4040 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 16 12:46:09.260000 audit[4040]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc5a7edb0 a2=0 a3=1 items=0 ppid=3794 pid=4040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:09.260000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:13.016000 audit[4042]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4042 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:13.021119 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 12:46:13.021250 kernel: audit: type=1325 audit(1765889173.016:554): table=filter:112 family=2 entries=17 op=nft_register_rule pid=4042 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:13.016000 audit[4042]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffeaf6e3b0 a2=0 a3=1 items=0 ppid=3794 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:13.058122 kernel: audit: type=1300 audit(1765889173.016:554): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffeaf6e3b0 a2=0 a3=1 items=0 ppid=3794 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:13.016000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:13.058000 audit[4042]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4042 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:13.082101 kernel: audit: type=1327 audit(1765889173.016:554): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:13.082242 kernel: audit: type=1325 audit(1765889173.058:555): table=nat:113 family=2 entries=12 op=nft_register_rule pid=4042 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:13.058000 audit[4042]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffeaf6e3b0 a2=0 a3=1 items=0 ppid=3794 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:13.058000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:13.119844 kernel: audit: type=1300 audit(1765889173.058:555): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffeaf6e3b0 a2=0 a3=1 items=0 ppid=3794 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:13.119983 kernel: audit: type=1327 audit(1765889173.058:555): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:13.128000 audit[4044]: NETFILTER_CFG 
table=filter:114 family=2 entries=18 op=nft_register_rule pid=4044 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:13.141128 kernel: audit: type=1325 audit(1765889173.128:556): table=filter:114 family=2 entries=18 op=nft_register_rule pid=4044 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:13.128000 audit[4044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe653a4b0 a2=0 a3=1 items=0 ppid=3794 pid=4044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:13.128000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:13.170835 kernel: audit: type=1300 audit(1765889173.128:556): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe653a4b0 a2=0 a3=1 items=0 ppid=3794 pid=4044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:13.170975 kernel: audit: type=1327 audit(1765889173.128:556): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:13.161000 audit[4044]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4044 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:13.180071 kernel: audit: type=1325 audit(1765889173.161:557): table=nat:115 family=2 entries=12 op=nft_register_rule pid=4044 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:13.161000 audit[4044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe653a4b0 a2=0 a3=1 items=0 ppid=3794 pid=4044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:13.161000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:14.193000 audit[4048]: NETFILTER_CFG table=filter:116 family=2 entries=19 op=nft_register_rule pid=4048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:14.193000 audit[4048]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffb34f370 a2=0 a3=1 items=0 ppid=3794 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:14.193000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:14.198000 audit[4048]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:14.198000 audit[4048]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffb34f370 a2=0 a3=1 items=0 ppid=3794 pid=4048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:14.198000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:15.202000 audit[4050]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4050 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:15.202000 audit[4050]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd0243400 a2=0 a3=1 items=0 ppid=3794 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:15.202000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:15.208000 audit[4050]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4050 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:15.208000 audit[4050]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd0243400 a2=0 a3=1 items=0 ppid=3794 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:15.208000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:15.259818 systemd[1]: Created slice kubepods-besteffort-pod28b9de1f_ec63_40d6_9349_fd412ac5e8bd.slice - libcontainer container kubepods-besteffort-pod28b9de1f_ec63_40d6_9349_fd412ac5e8bd.slice. Dec 16 12:46:15.402218 kubelet[3656]: I1216 12:46:15.402174 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28b9de1f-ec63-40d6-9349-fd412ac5e8bd-tigera-ca-bundle\") pod \"calico-typha-8fc8b6c86-tv6hf\" (UID: \"28b9de1f-ec63-40d6-9349-fd412ac5e8bd\") " pod="calico-system/calico-typha-8fc8b6c86-tv6hf" Dec 16 12:46:15.402218 kubelet[3656]: I1216 12:46:15.402215 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwwgd\" (UniqueName: \"kubernetes.io/projected/28b9de1f-ec63-40d6-9349-fd412ac5e8bd-kube-api-access-dwwgd\") pod \"calico-typha-8fc8b6c86-tv6hf\" (UID: \"28b9de1f-ec63-40d6-9349-fd412ac5e8bd\") " pod="calico-system/calico-typha-8fc8b6c86-tv6hf" Dec 16 12:46:15.402218 kubelet[3656]: I1216 12:46:15.402233 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/28b9de1f-ec63-40d6-9349-fd412ac5e8bd-typha-certs\") pod \"calico-typha-8fc8b6c86-tv6hf\" (UID: \"28b9de1f-ec63-40d6-9349-fd412ac5e8bd\") " pod="calico-system/calico-typha-8fc8b6c86-tv6hf" Dec 16 12:46:15.469607 systemd[1]: Created slice kubepods-besteffort-pod3b66db1c_5db1_4f3e_a579_d25d51c88422.slice - libcontainer container kubepods-besteffort-pod3b66db1c_5db1_4f3e_a579_d25d51c88422.slice. 
Dec 16 12:46:15.563679 containerd[2141]: time="2025-12-16T12:46:15.563632560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8fc8b6c86-tv6hf,Uid:28b9de1f-ec63-40d6-9349-fd412ac5e8bd,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:15.603335 kubelet[3656]: I1216 12:46:15.603180 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3b66db1c-5db1-4f3e-a579-d25d51c88422-cni-bin-dir\") pod \"calico-node-x9mq7\" (UID: \"3b66db1c-5db1-4f3e-a579-d25d51c88422\") " pod="calico-system/calico-node-x9mq7" Dec 16 12:46:15.603335 kubelet[3656]: I1216 12:46:15.603221 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3b66db1c-5db1-4f3e-a579-d25d51c88422-var-run-calico\") pod \"calico-node-x9mq7\" (UID: \"3b66db1c-5db1-4f3e-a579-d25d51c88422\") " pod="calico-system/calico-node-x9mq7" Dec 16 12:46:15.603335 kubelet[3656]: I1216 12:46:15.603234 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmprj\" (UniqueName: \"kubernetes.io/projected/3b66db1c-5db1-4f3e-a579-d25d51c88422-kube-api-access-kmprj\") pod \"calico-node-x9mq7\" (UID: \"3b66db1c-5db1-4f3e-a579-d25d51c88422\") " pod="calico-system/calico-node-x9mq7" Dec 16 12:46:15.603335 kubelet[3656]: I1216 12:46:15.603248 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3b66db1c-5db1-4f3e-a579-d25d51c88422-policysync\") pod \"calico-node-x9mq7\" (UID: \"3b66db1c-5db1-4f3e-a579-d25d51c88422\") " pod="calico-system/calico-node-x9mq7" Dec 16 12:46:15.603335 kubelet[3656]: I1216 12:46:15.603260 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3b66db1c-5db1-4f3e-a579-d25d51c88422-flexvol-driver-host\") pod \"calico-node-x9mq7\" (UID: \"3b66db1c-5db1-4f3e-a579-d25d51c88422\") " pod="calico-system/calico-node-x9mq7" Dec 16 12:46:15.603563 kubelet[3656]: I1216 12:46:15.603275 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b66db1c-5db1-4f3e-a579-d25d51c88422-tigera-ca-bundle\") pod \"calico-node-x9mq7\" (UID: \"3b66db1c-5db1-4f3e-a579-d25d51c88422\") " pod="calico-system/calico-node-x9mq7" Dec 16 12:46:15.603563 kubelet[3656]: I1216 12:46:15.603286 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3b66db1c-5db1-4f3e-a579-d25d51c88422-node-certs\") pod \"calico-node-x9mq7\" (UID: \"3b66db1c-5db1-4f3e-a579-d25d51c88422\") " pod="calico-system/calico-node-x9mq7" Dec 16 12:46:15.603563 kubelet[3656]: I1216 12:46:15.603301 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3b66db1c-5db1-4f3e-a579-d25d51c88422-var-lib-calico\") pod \"calico-node-x9mq7\" (UID: \"3b66db1c-5db1-4f3e-a579-d25d51c88422\") " pod="calico-system/calico-node-x9mq7" Dec 16 12:46:15.603563 kubelet[3656]: I1216 12:46:15.603344 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/3b66db1c-5db1-4f3e-a579-d25d51c88422-cni-net-dir\") pod \"calico-node-x9mq7\" (UID: \"3b66db1c-5db1-4f3e-a579-d25d51c88422\") " pod="calico-system/calico-node-x9mq7" Dec 16 12:46:15.603563 kubelet[3656]: I1216 12:46:15.603376 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b66db1c-5db1-4f3e-a579-d25d51c88422-xtables-lock\") pod \"calico-node-x9mq7\" (UID: \"3b66db1c-5db1-4f3e-a579-d25d51c88422\") " pod="calico-system/calico-node-x9mq7" Dec 16 12:46:15.603641 kubelet[3656]: I1216 12:46:15.603397 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3b66db1c-5db1-4f3e-a579-d25d51c88422-cni-log-dir\") pod \"calico-node-x9mq7\" (UID: \"3b66db1c-5db1-4f3e-a579-d25d51c88422\") " pod="calico-system/calico-node-x9mq7" Dec 16 12:46:15.603641 kubelet[3656]: I1216 12:46:15.603410 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b66db1c-5db1-4f3e-a579-d25d51c88422-lib-modules\") pod \"calico-node-x9mq7\" (UID: \"3b66db1c-5db1-4f3e-a579-d25d51c88422\") " pod="calico-system/calico-node-x9mq7" Dec 16 12:46:15.618121 containerd[2141]: time="2025-12-16T12:46:15.617272761Z" level=info msg="connecting to shim 63c29fa153183903e95fd2c8b28ef97372f90b8aaa1df4beb77bc90b577a92aa" address="unix:///run/containerd/s/11a053f832777de259bc80f411345425eece64e59823819f60cbe61e79d01b93" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:15.648437 systemd[1]: Started cri-containerd-63c29fa153183903e95fd2c8b28ef97372f90b8aaa1df4beb77bc90b577a92aa.scope - libcontainer container 63c29fa153183903e95fd2c8b28ef97372f90b8aaa1df4beb77bc90b577a92aa. 
Dec 16 12:46:15.675132 kubelet[3656]: E1216 12:46:15.674007 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q9ldj" podUID="c8d2f0f9-d4bf-424e-80b4-888570287c6a" Dec 16 12:46:15.681000 audit: BPF prog-id=175 op=LOAD Dec 16 12:46:15.681000 audit: BPF prog-id=176 op=LOAD Dec 16 12:46:15.681000 audit[4073]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=4062 pid=4073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:15.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633633239666131353331383339303365393566643263386232386566 Dec 16 12:46:15.681000 audit: BPF prog-id=176 op=UNLOAD Dec 16 12:46:15.681000 audit[4073]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4062 pid=4073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:15.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633633239666131353331383339303365393566643263386232386566 Dec 16 12:46:15.682000 audit: BPF prog-id=177 op=LOAD Dec 16 12:46:15.682000 audit[4073]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=4062 pid=4073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:15.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633633239666131353331383339303365393566643263386232386566 Dec 16 12:46:15.682000 audit: BPF prog-id=178 op=LOAD Dec 16 12:46:15.682000 audit[4073]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=4062 pid=4073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:15.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633633239666131353331383339303365393566643263386232386566 Dec 16 12:46:15.682000 audit: BPF prog-id=178 op=UNLOAD Dec 16 12:46:15.682000 audit[4073]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4062 pid=4073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:15.682000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633633239666131353331383339303365393566643263386232386566 Dec 16 12:46:15.682000 audit: BPF prog-id=177 op=UNLOAD Dec 16 12:46:15.682000 audit[4073]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4062 pid=4073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:15.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633633239666131353331383339303365393566643263386232386566 Dec 16 12:46:15.682000 audit: BPF prog-id=179 op=LOAD Dec 16 12:46:15.682000 audit[4073]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=4062 pid=4073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:15.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633633239666131353331383339303365393566643263386232386566 Dec 16 12:46:15.707895 kubelet[3656]: E1216 12:46:15.707854 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.707895 kubelet[3656]: W1216 12:46:15.707879 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.707895 kubelet[3656]: E1216 12:46:15.707902 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.718546 kubelet[3656]: E1216 12:46:15.718510 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.718784 kubelet[3656]: W1216 12:46:15.718619 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.718784 kubelet[3656]: E1216 12:46:15.718646 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:15.719550 containerd[2141]: time="2025-12-16T12:46:15.719500274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8fc8b6c86-tv6hf,Uid:28b9de1f-ec63-40d6-9349-fd412ac5e8bd,Namespace:calico-system,Attempt:0,} returns sandbox id \"63c29fa153183903e95fd2c8b28ef97372f90b8aaa1df4beb77bc90b577a92aa\"" Dec 16 12:46:15.723243 containerd[2141]: time="2025-12-16T12:46:15.722573712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 12:46:15.728284 kubelet[3656]: E1216 12:46:15.728183 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.728284 kubelet[3656]: W1216 12:46:15.728213 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.728284 kubelet[3656]: E1216 12:46:15.728245 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.729654 kubelet[3656]: E1216 12:46:15.729638 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.729816 kubelet[3656]: W1216 12:46:15.729730 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.729816 kubelet[3656]: E1216 12:46:15.729748 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.730784 kubelet[3656]: E1216 12:46:15.730211 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.730931 kubelet[3656]: W1216 12:46:15.730226 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.730931 kubelet[3656]: E1216 12:46:15.730896 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.731303 kubelet[3656]: E1216 12:46:15.731285 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.731446 kubelet[3656]: W1216 12:46:15.731375 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.731446 kubelet[3656]: E1216 12:46:15.731390 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:15.731745 kubelet[3656]: E1216 12:46:15.731673 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.731745 kubelet[3656]: W1216 12:46:15.731688 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.731745 kubelet[3656]: E1216 12:46:15.731700 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.732059 kubelet[3656]: E1216 12:46:15.732010 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.732059 kubelet[3656]: W1216 12:46:15.732021 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.732059 kubelet[3656]: E1216 12:46:15.732030 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.733484 kubelet[3656]: E1216 12:46:15.732778 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.733484 kubelet[3656]: W1216 12:46:15.733412 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.733484 kubelet[3656]: E1216 12:46:15.733432 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.733825 kubelet[3656]: E1216 12:46:15.733760 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.733825 kubelet[3656]: W1216 12:46:15.733771 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.733825 kubelet[3656]: E1216 12:46:15.733783 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.734150 kubelet[3656]: E1216 12:46:15.734138 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.734293 kubelet[3656]: W1216 12:46:15.734216 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.734293 kubelet[3656]: E1216 12:46:15.734232 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:15.734586 kubelet[3656]: E1216 12:46:15.734495 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.734586 kubelet[3656]: W1216 12:46:15.734516 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.734586 kubelet[3656]: E1216 12:46:15.734526 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.734839 kubelet[3656]: E1216 12:46:15.734799 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.734839 kubelet[3656]: W1216 12:46:15.734810 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.734967 kubelet[3656]: E1216 12:46:15.734819 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.735208 kubelet[3656]: E1216 12:46:15.735190 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.735336 kubelet[3656]: W1216 12:46:15.735275 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.735336 kubelet[3656]: E1216 12:46:15.735292 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.735540 kubelet[3656]: E1216 12:46:15.735529 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.735669 kubelet[3656]: W1216 12:46:15.735588 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.735669 kubelet[3656]: E1216 12:46:15.735599 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.735864 kubelet[3656]: E1216 12:46:15.735850 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.735978 kubelet[3656]: W1216 12:46:15.735930 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.735978 kubelet[3656]: E1216 12:46:15.735944 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:15.736166 kubelet[3656]: E1216 12:46:15.736156 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.736341 kubelet[3656]: W1216 12:46:15.736203 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.736341 kubelet[3656]: E1216 12:46:15.736215 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.736445 kubelet[3656]: E1216 12:46:15.736437 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.736510 kubelet[3656]: W1216 12:46:15.736488 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.736556 kubelet[3656]: E1216 12:46:15.736546 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.736800 kubelet[3656]: E1216 12:46:15.736738 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.736800 kubelet[3656]: W1216 12:46:15.736748 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.736800 kubelet[3656]: E1216 12:46:15.736756 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.737015 kubelet[3656]: E1216 12:46:15.737005 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.737152 kubelet[3656]: W1216 12:46:15.737073 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.737152 kubelet[3656]: E1216 12:46:15.737117 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.737342 kubelet[3656]: E1216 12:46:15.737332 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.737438 kubelet[3656]: W1216 12:46:15.737389 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.737438 kubelet[3656]: E1216 12:46:15.737403 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:15.737659 kubelet[3656]: E1216 12:46:15.737649 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.737819 kubelet[3656]: W1216 12:46:15.737721 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.737819 kubelet[3656]: E1216 12:46:15.737736 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.737929 kubelet[3656]: E1216 12:46:15.737920 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.738042 kubelet[3656]: W1216 12:46:15.737959 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.738042 kubelet[3656]: E1216 12:46:15.737970 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.775725 containerd[2141]: time="2025-12-16T12:46:15.775684241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x9mq7,Uid:3b66db1c-5db1-4f3e-a579-d25d51c88422,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:15.804339 kubelet[3656]: E1216 12:46:15.804288 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.804603 kubelet[3656]: W1216 12:46:15.804319 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.804603 kubelet[3656]: E1216 12:46:15.804496 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.804603 kubelet[3656]: I1216 12:46:15.804532 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c8d2f0f9-d4bf-424e-80b4-888570287c6a-socket-dir\") pod \"csi-node-driver-q9ldj\" (UID: \"c8d2f0f9-d4bf-424e-80b4-888570287c6a\") " pod="calico-system/csi-node-driver-q9ldj" Dec 16 12:46:15.805226 kubelet[3656]: E1216 12:46:15.805046 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.805226 kubelet[3656]: W1216 12:46:15.805063 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.805448 kubelet[3656]: E1216 12:46:15.805329 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:15.805448 kubelet[3656]: I1216 12:46:15.805356 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mpmm\" (UniqueName: \"kubernetes.io/projected/c8d2f0f9-d4bf-424e-80b4-888570287c6a-kube-api-access-4mpmm\") pod \"csi-node-driver-q9ldj\" (UID: \"c8d2f0f9-d4bf-424e-80b4-888570287c6a\") " pod="calico-system/csi-node-driver-q9ldj" Dec 16 12:46:15.805448 kubelet[3656]: E1216 12:46:15.805376 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.805448 kubelet[3656]: W1216 12:46:15.805388 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.805448 kubelet[3656]: E1216 12:46:15.805401 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.805731 kubelet[3656]: E1216 12:46:15.805715 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.805731 kubelet[3656]: W1216 12:46:15.805726 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.805826 kubelet[3656]: E1216 12:46:15.805738 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.806145 kubelet[3656]: E1216 12:46:15.806075 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.806145 kubelet[3656]: W1216 12:46:15.806109 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.806145 kubelet[3656]: E1216 12:46:15.806120 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.806527 kubelet[3656]: E1216 12:46:15.806498 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.806527 kubelet[3656]: W1216 12:46:15.806509 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.806527 kubelet[3656]: E1216 12:46:15.806519 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:15.807264 kubelet[3656]: E1216 12:46:15.807204 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.807264 kubelet[3656]: W1216 12:46:15.807221 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.807264 kubelet[3656]: E1216 12:46:15.807233 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.807264 kubelet[3656]: I1216 12:46:15.807255 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c8d2f0f9-d4bf-424e-80b4-888570287c6a-registration-dir\") pod \"csi-node-driver-q9ldj\" (UID: \"c8d2f0f9-d4bf-424e-80b4-888570287c6a\") " pod="calico-system/csi-node-driver-q9ldj" Dec 16 12:46:15.807478 kubelet[3656]: E1216 12:46:15.807441 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.807478 kubelet[3656]: W1216 12:46:15.807451 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.807478 kubelet[3656]: E1216 12:46:15.807460 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.807478 kubelet[3656]: I1216 12:46:15.807473 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8d2f0f9-d4bf-424e-80b4-888570287c6a-kubelet-dir\") pod \"csi-node-driver-q9ldj\" (UID: \"c8d2f0f9-d4bf-424e-80b4-888570287c6a\") " pod="calico-system/csi-node-driver-q9ldj" Dec 16 12:46:15.808013 kubelet[3656]: E1216 12:46:15.807990 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.809041 kubelet[3656]: W1216 12:46:15.808004 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.809041 kubelet[3656]: E1216 12:46:15.808590 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:15.809041 kubelet[3656]: I1216 12:46:15.808612 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c8d2f0f9-d4bf-424e-80b4-888570287c6a-varrun\") pod \"csi-node-driver-q9ldj\" (UID: \"c8d2f0f9-d4bf-424e-80b4-888570287c6a\") " pod="calico-system/csi-node-driver-q9ldj" Dec 16 12:46:15.810048 kubelet[3656]: E1216 12:46:15.809808 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.810048 kubelet[3656]: W1216 12:46:15.809847 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.810048 kubelet[3656]: E1216 12:46:15.809866 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.810048 kubelet[3656]: E1216 12:46:15.810014 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.810048 kubelet[3656]: W1216 12:46:15.810022 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.810048 kubelet[3656]: E1216 12:46:15.810034 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.810206 kubelet[3656]: E1216 12:46:15.810189 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.810227 kubelet[3656]: W1216 12:46:15.810207 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.810244 kubelet[3656]: E1216 12:46:15.810216 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.810398 kubelet[3656]: E1216 12:46:15.810366 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.810398 kubelet[3656]: W1216 12:46:15.810393 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.811251 kubelet[3656]: E1216 12:46:15.810401 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:15.811251 kubelet[3656]: E1216 12:46:15.810525 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.811251 kubelet[3656]: W1216 12:46:15.810547 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.811251 kubelet[3656]: E1216 12:46:15.810556 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.811251 kubelet[3656]: E1216 12:46:15.810671 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.811251 kubelet[3656]: W1216 12:46:15.810679 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.811251 kubelet[3656]: E1216 12:46:15.810700 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.829505 containerd[2141]: time="2025-12-16T12:46:15.829458687Z" level=info msg="connecting to shim 4ac1f0a0cfd50d75e4c4bacb9b24f409be450d1ebe4100264f9afe9e9a5da35b" address="unix:///run/containerd/s/8ec3e0ae93bebcfca6b9fac8015acda88d75aa08bfbf4524787d6a5549a366e0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:15.849316 systemd[1]: Started cri-containerd-4ac1f0a0cfd50d75e4c4bacb9b24f409be450d1ebe4100264f9afe9e9a5da35b.scope - libcontainer container 4ac1f0a0cfd50d75e4c4bacb9b24f409be450d1ebe4100264f9afe9e9a5da35b. 
Dec 16 12:46:15.856000 audit: BPF prog-id=180 op=LOAD Dec 16 12:46:15.857000 audit: BPF prog-id=181 op=LOAD Dec 16 12:46:15.857000 audit[4158]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4146 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:15.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461633166306130636664353064373565346334626163623962323466 Dec 16 12:46:15.858000 audit: BPF prog-id=181 op=UNLOAD Dec 16 12:46:15.858000 audit[4158]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4146 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:15.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461633166306130636664353064373565346334626163623962323466 Dec 16 12:46:15.858000 audit: BPF prog-id=182 op=LOAD Dec 16 12:46:15.858000 audit[4158]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4146 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:15.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461633166306130636664353064373565346334626163623962323466 Dec 16 12:46:15.858000 audit: BPF prog-id=183 op=LOAD Dec 16 12:46:15.858000 audit[4158]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4146 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:15.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461633166306130636664353064373565346334626163623962323466 Dec 16 12:46:15.858000 audit: BPF prog-id=183 op=UNLOAD Dec 16 12:46:15.858000 audit[4158]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4146 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:15.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461633166306130636664353064373565346334626163623962323466 Dec 16 12:46:15.858000 audit: BPF prog-id=182 op=UNLOAD Dec 16 12:46:15.858000 audit[4158]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4146 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:15.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461633166306130636664353064373565346334626163623962323466 Dec 16 12:46:15.858000 audit: BPF prog-id=184 op=LOAD Dec 16 12:46:15.858000 audit[4158]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4146 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:15.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461633166306130636664353064373565346334626163623962323466 Dec 16 12:46:15.876883 containerd[2141]: time="2025-12-16T12:46:15.876839025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x9mq7,Uid:3b66db1c-5db1-4f3e-a579-d25d51c88422,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ac1f0a0cfd50d75e4c4bacb9b24f409be450d1ebe4100264f9afe9e9a5da35b\"" Dec 16 12:46:15.909586 kubelet[3656]: E1216 12:46:15.909475 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.910025 kubelet[3656]: W1216 12:46:15.909752 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.910025 kubelet[3656]: E1216 12:46:15.909780 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.910206 kubelet[3656]: E1216 12:46:15.910074 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.910206 kubelet[3656]: W1216 12:46:15.910118 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.910206 kubelet[3656]: E1216 12:46:15.910130 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:15.910367 kubelet[3656]: E1216 12:46:15.910268 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.910367 kubelet[3656]: W1216 12:46:15.910275 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.910367 kubelet[3656]: E1216 12:46:15.910287 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.910767 kubelet[3656]: E1216 12:46:15.910683 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.910767 kubelet[3656]: W1216 12:46:15.910695 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.910767 kubelet[3656]: E1216 12:46:15.910714 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.910857 kubelet[3656]: E1216 12:46:15.910839 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.910857 kubelet[3656]: W1216 12:46:15.910849 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.910913 kubelet[3656]: E1216 12:46:15.910869 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.911064 kubelet[3656]: E1216 12:46:15.911051 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.911064 kubelet[3656]: W1216 12:46:15.911060 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.911205 kubelet[3656]: E1216 12:46:15.911071 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.911590 kubelet[3656]: E1216 12:46:15.911477 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.911590 kubelet[3656]: W1216 12:46:15.911573 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.911906 kubelet[3656]: E1216 12:46:15.911713 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:15.912493 kubelet[3656]: E1216 12:46:15.912419 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.912493 kubelet[3656]: W1216 12:46:15.912431 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.912573 kubelet[3656]: E1216 12:46:15.912507 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.913177 kubelet[3656]: E1216 12:46:15.913154 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.913345 kubelet[3656]: W1216 12:46:15.913243 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.913345 kubelet[3656]: E1216 12:46:15.913280 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.913493 kubelet[3656]: E1216 12:46:15.913447 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.913985 kubelet[3656]: W1216 12:46:15.913963 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.914147 kubelet[3656]: E1216 12:46:15.914060 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.914694 kubelet[3656]: E1216 12:46:15.914419 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.914865 kubelet[3656]: W1216 12:46:15.914780 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.914865 kubelet[3656]: E1216 12:46:15.914821 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.916160 kubelet[3656]: E1216 12:46:15.915114 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.916383 kubelet[3656]: W1216 12:46:15.916238 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.916383 kubelet[3656]: E1216 12:46:15.916335 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:15.916773 kubelet[3656]: E1216 12:46:15.916759 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.917020 kubelet[3656]: W1216 12:46:15.916955 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.917020 kubelet[3656]: E1216 12:46:15.916993 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.917588 kubelet[3656]: E1216 12:46:15.917490 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.917588 kubelet[3656]: W1216 12:46:15.917507 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.917820 kubelet[3656]: E1216 12:46:15.917786 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.917820 kubelet[3656]: W1216 12:46:15.917799 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.918383 kubelet[3656]: E1216 12:46:15.918370 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.918462 kubelet[3656]: E1216 12:46:15.918440 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.918462 kubelet[3656]: E1216 12:46:15.918460 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.918570 kubelet[3656]: W1216 12:46:15.918556 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.918960 kubelet[3656]: E1216 12:46:15.918889 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.918960 kubelet[3656]: W1216 12:46:15.918908 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.919421 kubelet[3656]: E1216 12:46:15.919404 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.919421 kubelet[3656]: E1216 12:46:15.919421 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:15.919771 kubelet[3656]: E1216 12:46:15.919755 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.920078 kubelet[3656]: W1216 12:46:15.919831 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.920078 kubelet[3656]: E1216 12:46:15.919850 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.920540 kubelet[3656]: E1216 12:46:15.920505 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.921225 kubelet[3656]: W1216 12:46:15.921178 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.921358 kubelet[3656]: E1216 12:46:15.921327 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.922972 kubelet[3656]: E1216 12:46:15.922861 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.922972 kubelet[3656]: W1216 12:46:15.922881 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.923116 kubelet[3656]: E1216 12:46:15.923105 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.923385 kubelet[3656]: E1216 12:46:15.923257 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.923385 kubelet[3656]: W1216 12:46:15.923268 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.923495 kubelet[3656]: E1216 12:46:15.923481 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.923616 kubelet[3656]: E1216 12:46:15.923590 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.923732 kubelet[3656]: W1216 12:46:15.923654 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.923945 kubelet[3656]: E1216 12:46:15.923919 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:15.924062 kubelet[3656]: E1216 12:46:15.924047 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.924174 kubelet[3656]: W1216 12:46:15.924151 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.924340 kubelet[3656]: E1216 12:46:15.924211 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.924545 kubelet[3656]: E1216 12:46:15.924533 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.924676 kubelet[3656]: W1216 12:46:15.924600 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.924676 kubelet[3656]: E1216 12:46:15.924615 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.924919 kubelet[3656]: E1216 12:46:15.924906 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.925067 kubelet[3656]: W1216 12:46:15.924955 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.925067 kubelet[3656]: E1216 12:46:15.924977 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:15.925340 kubelet[3656]: E1216 12:46:15.925326 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:15.925460 kubelet[3656]: W1216 12:46:15.925378 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:15.925460 kubelet[3656]: E1216 12:46:15.925391 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:16.221000 audit[4213]: NETFILTER_CFG table=filter:120 family=2 entries=22 op=nft_register_rule pid=4213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:16.221000 audit[4213]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffdf277490 a2=0 a3=1 items=0 ppid=3794 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:16.221000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:16.224000 audit[4213]: NETFILTER_CFG table=nat:121 family=2 entries=12 op=nft_register_rule pid=4213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:16.224000 audit[4213]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdf277490 a2=0 a3=1 items=0 ppid=3794 pid=4213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:16.224000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:17.023200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount769697660.mount: Deactivated successfully. Dec 16 12:46:17.441847 containerd[2141]: time="2025-12-16T12:46:17.441696502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:17.444852 containerd[2141]: time="2025-12-16T12:46:17.444799828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33086690" Dec 16 12:46:17.447368 containerd[2141]: time="2025-12-16T12:46:17.447336161Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:17.450832 containerd[2141]: time="2025-12-16T12:46:17.450794265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:17.451245 containerd[2141]: time="2025-12-16T12:46:17.451064345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.727854934s" Dec 16 12:46:17.451245 containerd[2141]: time="2025-12-16T12:46:17.451113347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 16 12:46:17.452918 containerd[2141]: time="2025-12-16T12:46:17.452884208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 12:46:17.466931 containerd[2141]: time="2025-12-16T12:46:17.466714027Z" level=info msg="CreateContainer within sandbox \"63c29fa153183903e95fd2c8b28ef97372f90b8aaa1df4beb77bc90b577a92aa\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 12:46:17.488116 containerd[2141]: time="2025-12-16T12:46:17.487463118Z" level=info msg="Container 196bdf7e16ed9babf8d901823a6fea934685fc92b21d646e30849852a1371a05: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:17.504793 containerd[2141]: time="2025-12-16T12:46:17.504745056Z" level=info msg="CreateContainer within sandbox \"63c29fa153183903e95fd2c8b28ef97372f90b8aaa1df4beb77bc90b577a92aa\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"196bdf7e16ed9babf8d901823a6fea934685fc92b21d646e30849852a1371a05\"" Dec 16 12:46:17.507263 containerd[2141]: time="2025-12-16T12:46:17.507235092Z" level=info msg="StartContainer for \"196bdf7e16ed9babf8d901823a6fea934685fc92b21d646e30849852a1371a05\"" Dec 16 12:46:17.509394 containerd[2141]: time="2025-12-16T12:46:17.509358332Z" level=info msg="connecting to shim 196bdf7e16ed9babf8d901823a6fea934685fc92b21d646e30849852a1371a05" address="unix:///run/containerd/s/11a053f832777de259bc80f411345425eece64e59823819f60cbe61e79d01b93" protocol=ttrpc version=3 Dec 16 12:46:17.527409 systemd[1]: Started cri-containerd-196bdf7e16ed9babf8d901823a6fea934685fc92b21d646e30849852a1371a05.scope - libcontainer container 196bdf7e16ed9babf8d901823a6fea934685fc92b21d646e30849852a1371a05. Dec 16 12:46:17.536000 audit: BPF prog-id=185 op=LOAD Dec 16 12:46:17.537000 audit: BPF prog-id=186 op=LOAD Dec 16 12:46:17.537000 audit[4225]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4062 pid=4225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:17.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139366264663765313665643962616266386439303138323361366665 Dec 16 12:46:17.537000 audit: BPF prog-id=186 op=UNLOAD Dec 16 12:46:17.537000 audit[4225]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4062 pid=4225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:17.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139366264663765313665643962616266386439303138323361366665 Dec 16 12:46:17.537000 audit: BPF prog-id=187 op=LOAD Dec 16 12:46:17.537000 audit[4225]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4062 pid=4225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:17.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139366264663765313665643962616266386439303138323361366665 Dec 16 12:46:17.537000 audit: BPF prog-id=188 op=LOAD Dec 16 12:46:17.537000 audit[4225]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 
a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4062 pid=4225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:17.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139366264663765313665643962616266386439303138323361366665 Dec 16 12:46:17.537000 audit: BPF prog-id=188 op=UNLOAD Dec 16 12:46:17.537000 audit[4225]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4062 pid=4225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:17.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139366264663765313665643962616266386439303138323361366665 Dec 16 12:46:17.537000 audit: BPF prog-id=187 op=UNLOAD Dec 16 12:46:17.537000 audit[4225]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4062 pid=4225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:17.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139366264663765313665643962616266386439303138323361366665 Dec 16 12:46:17.537000 audit: BPF prog-id=189 op=LOAD Dec 16 12:46:17.537000 audit[4225]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4062 pid=4225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:17.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139366264663765313665643962616266386439303138323361366665 Dec 16 12:46:17.565275 containerd[2141]: time="2025-12-16T12:46:17.565141603Z" level=info msg="StartContainer for \"196bdf7e16ed9babf8d901823a6fea934685fc92b21d646e30849852a1371a05\" returns successfully" Dec 16 12:46:17.565824 kubelet[3656]: E1216 12:46:17.565271 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q9ldj" podUID="c8d2f0f9-d4bf-424e-80b4-888570287c6a" Dec 16 12:46:17.690399 kubelet[3656]: I1216 12:46:17.690340 3656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8fc8b6c86-tv6hf" podStartSLOduration=0.960409545 podStartE2EDuration="2.690325109s" podCreationTimestamp="2025-12-16 12:46:15 +0000 UTC" firstStartedPulling="2025-12-16 12:46:15.722181564 +0000 UTC m=+22.230835234" 
lastFinishedPulling="2025-12-16 12:46:17.452097136 +0000 UTC m=+23.960750798" observedRunningTime="2025-12-16 12:46:17.688098065 +0000 UTC m=+24.196751735" watchObservedRunningTime="2025-12-16 12:46:17.690325109 +0000 UTC m=+24.198978779" Dec 16 12:46:17.751182 kubelet[3656]: E1216 12:46:17.751055 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.751182 kubelet[3656]: W1216 12:46:17.751094 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.751182 kubelet[3656]: E1216 12:46:17.751143 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.753173 kubelet[3656]: E1216 12:46:17.751756 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.753173 kubelet[3656]: W1216 12:46:17.751768 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.753173 kubelet[3656]: E1216 12:46:17.751809 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.753331 kubelet[3656]: E1216 12:46:17.753311 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.753358 kubelet[3656]: W1216 12:46:17.753330 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.753378 kubelet[3656]: E1216 12:46:17.753358 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.753660 kubelet[3656]: E1216 12:46:17.753641 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.753660 kubelet[3656]: W1216 12:46:17.753657 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.753807 kubelet[3656]: E1216 12:46:17.753667 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:17.754342 kubelet[3656]: E1216 12:46:17.754301 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.754342 kubelet[3656]: W1216 12:46:17.754317 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.754342 kubelet[3656]: E1216 12:46:17.754328 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.754657 kubelet[3656]: E1216 12:46:17.754626 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.754657 kubelet[3656]: W1216 12:46:17.754640 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.754657 kubelet[3656]: E1216 12:46:17.754650 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.754807 kubelet[3656]: E1216 12:46:17.754791 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.754807 kubelet[3656]: W1216 12:46:17.754802 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.754807 kubelet[3656]: E1216 12:46:17.754809 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.756213 kubelet[3656]: E1216 12:46:17.754932 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.756213 kubelet[3656]: W1216 12:46:17.754939 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.756213 kubelet[3656]: E1216 12:46:17.754947 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.756213 kubelet[3656]: E1216 12:46:17.755100 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.756213 kubelet[3656]: W1216 12:46:17.755114 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.756213 kubelet[3656]: E1216 12:46:17.755123 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:17.756213 kubelet[3656]: E1216 12:46:17.755269 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.756213 kubelet[3656]: W1216 12:46:17.755276 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.756213 kubelet[3656]: E1216 12:46:17.755284 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.756213 kubelet[3656]: E1216 12:46:17.755390 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.756366 kubelet[3656]: W1216 12:46:17.755395 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.756366 kubelet[3656]: E1216 12:46:17.755402 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.756366 kubelet[3656]: E1216 12:46:17.755532 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.756366 kubelet[3656]: W1216 12:46:17.755539 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.756366 kubelet[3656]: E1216 12:46:17.755545 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.756366 kubelet[3656]: E1216 12:46:17.755679 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.756366 kubelet[3656]: W1216 12:46:17.755684 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.756366 kubelet[3656]: E1216 12:46:17.755690 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.756366 kubelet[3656]: E1216 12:46:17.755789 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.756366 kubelet[3656]: W1216 12:46:17.755794 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.756502 kubelet[3656]: E1216 12:46:17.755800 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:17.756502 kubelet[3656]: E1216 12:46:17.755901 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.756502 kubelet[3656]: W1216 12:46:17.755906 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.756502 kubelet[3656]: E1216 12:46:17.755917 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.829234 kubelet[3656]: E1216 12:46:17.828898 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.829234 kubelet[3656]: W1216 12:46:17.829127 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.829234 kubelet[3656]: E1216 12:46:17.829160 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.830891 kubelet[3656]: E1216 12:46:17.830686 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.830891 kubelet[3656]: W1216 12:46:17.830767 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.830891 kubelet[3656]: E1216 12:46:17.830795 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.831435 kubelet[3656]: E1216 12:46:17.831290 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.831435 kubelet[3656]: W1216 12:46:17.831374 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.831435 kubelet[3656]: E1216 12:46:17.831414 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.831711 kubelet[3656]: E1216 12:46:17.831658 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.831711 kubelet[3656]: W1216 12:46:17.831668 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.831711 kubelet[3656]: E1216 12:46:17.831692 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:17.832025 kubelet[3656]: E1216 12:46:17.831958 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.832025 kubelet[3656]: W1216 12:46:17.831971 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.832025 kubelet[3656]: E1216 12:46:17.831988 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.832352 kubelet[3656]: E1216 12:46:17.832290 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.832352 kubelet[3656]: W1216 12:46:17.832301 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.832352 kubelet[3656]: E1216 12:46:17.832318 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.832665 kubelet[3656]: E1216 12:46:17.832569 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.832665 kubelet[3656]: W1216 12:46:17.832582 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.832665 kubelet[3656]: E1216 12:46:17.832601 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.832882 kubelet[3656]: E1216 12:46:17.832872 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.832973 kubelet[3656]: W1216 12:46:17.832926 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.832973 kubelet[3656]: E1216 12:46:17.832952 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.833196 kubelet[3656]: E1216 12:46:17.833174 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.833196 kubelet[3656]: W1216 12:46:17.833184 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.833359 kubelet[3656]: E1216 12:46:17.833334 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:17.833569 kubelet[3656]: E1216 12:46:17.833518 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.833569 kubelet[3656]: W1216 12:46:17.833529 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.833569 kubelet[3656]: E1216 12:46:17.833551 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.833854 kubelet[3656]: E1216 12:46:17.833777 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.833854 kubelet[3656]: W1216 12:46:17.833788 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.833854 kubelet[3656]: E1216 12:46:17.833805 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.834168 kubelet[3656]: E1216 12:46:17.834101 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.834168 kubelet[3656]: W1216 12:46:17.834113 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.834168 kubelet[3656]: E1216 12:46:17.834131 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.834472 kubelet[3656]: E1216 12:46:17.834390 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.834472 kubelet[3656]: W1216 12:46:17.834401 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.834472 kubelet[3656]: E1216 12:46:17.834419 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.834682 kubelet[3656]: E1216 12:46:17.834670 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.834744 kubelet[3656]: W1216 12:46:17.834734 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.834803 kubelet[3656]: E1216 12:46:17.834791 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:17.835545 kubelet[3656]: E1216 12:46:17.835518 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.835545 kubelet[3656]: W1216 12:46:17.835543 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.835817 kubelet[3656]: E1216 12:46:17.835560 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.835938 kubelet[3656]: E1216 12:46:17.835922 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.835938 kubelet[3656]: W1216 12:46:17.835936 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.836001 kubelet[3656]: E1216 12:46:17.835951 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.836385 kubelet[3656]: E1216 12:46:17.836283 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.836528 kubelet[3656]: W1216 12:46:17.836444 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.836528 kubelet[3656]: E1216 12:46:17.836472 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:17.836868 kubelet[3656]: E1216 12:46:17.836773 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:17.836973 kubelet[3656]: W1216 12:46:17.836929 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:17.836973 kubelet[3656]: E1216 12:46:17.836947 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:18.596836 containerd[2141]: time="2025-12-16T12:46:18.596293786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:18.598732 containerd[2141]: time="2025-12-16T12:46:18.598685627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:18.601394 containerd[2141]: time="2025-12-16T12:46:18.601316898Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:18.604911 containerd[2141]: time="2025-12-16T12:46:18.604853173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:18.605496 containerd[2141]: time="2025-12-16T12:46:18.605334388Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.152415386s" Dec 16 12:46:18.605496 containerd[2141]: time="2025-12-16T12:46:18.605361669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 16 12:46:18.609010 containerd[2141]: time="2025-12-16T12:46:18.608976330Z" level=info msg="CreateContainer within sandbox \"4ac1f0a0cfd50d75e4c4bacb9b24f409be450d1ebe4100264f9afe9e9a5da35b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 12:46:18.637173 containerd[2141]: time="2025-12-16T12:46:18.634589536Z" level=info msg="Container 2ed0a8c95a46c71ca99117e13292174a7cf7c83eab2028262738817b78d06f58: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:18.636497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2520454039.mount: Deactivated successfully. Dec 16 12:46:18.663421 kubelet[3656]: I1216 12:46:18.663351 3656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:46:18.711597 containerd[2141]: time="2025-12-16T12:46:18.711488126Z" level=info msg="CreateContainer within sandbox \"4ac1f0a0cfd50d75e4c4bacb9b24f409be450d1ebe4100264f9afe9e9a5da35b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2ed0a8c95a46c71ca99117e13292174a7cf7c83eab2028262738817b78d06f58\"" Dec 16 12:46:18.713481 containerd[2141]: time="2025-12-16T12:46:18.712583935Z" level=info msg="StartContainer for \"2ed0a8c95a46c71ca99117e13292174a7cf7c83eab2028262738817b78d06f58\"" Dec 16 12:46:18.714358 containerd[2141]: time="2025-12-16T12:46:18.714322451Z" level=info msg="connecting to shim 2ed0a8c95a46c71ca99117e13292174a7cf7c83eab2028262738817b78d06f58" address="unix:///run/containerd/s/8ec3e0ae93bebcfca6b9fac8015acda88d75aa08bfbf4524787d6a5549a366e0" protocol=ttrpc version=3 Dec 16 12:46:18.738312 systemd[1]: Started cri-containerd-2ed0a8c95a46c71ca99117e13292174a7cf7c83eab2028262738817b78d06f58.scope - libcontainer container 2ed0a8c95a46c71ca99117e13292174a7cf7c83eab2028262738817b78d06f58. 
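The flexvol-driver container started above (from ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4) is the component that normally installs the uds binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/, so the repeated kubelet errors throughout this section are what you see while that has not yet happened: kubelet probes the plugin directory, tries to exec the missing binary with the single argument "init", gets empty stdout, and driver-call.go then fails to unmarshal "" as JSON. The following is a minimal sketch in Python, assuming only the generic FlexVolume exec convention (driver invoked as "<binary> <operation> [json-options]" and replying with a JSON object on stdout); it is a hypothetical stand-in for illustration, not the Calico nodeagent~uds driver.

    #!/usr/bin/env python3
    # Hypothetical FlexVolume driver sketch: shows the call convention that
    # kubelet's driver-call.go expects. An empty stdout is what produces the
    # "unexpected end of JSON input" errors in the log above.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # Report success and advertise no attach/detach support, so the
            # kubelet only issues mount/unmount style operations later.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
            return 0
        # Any other operation is declined, but still as parseable JSON.
        print(json.dumps({"status": "Not supported",
                          "message": f"operation {op!r} is not implemented"}))
        return 0

    if __name__ == "__main__":
        sys.exit(main())

A response of this shape to "init" is what the dynamic probe in plugins.go expects; once the real driver binary is present and answers this way, the driver-call.go/plugins.go error triplet stops repeating.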
Dec 16 12:46:18.764040 kubelet[3656]: E1216 12:46:18.764006 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.764040 kubelet[3656]: W1216 12:46:18.764038 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.764293 kubelet[3656]: E1216 12:46:18.764060 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.764524 kubelet[3656]: E1216 12:46:18.764504 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.764599 kubelet[3656]: W1216 12:46:18.764525 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.764599 kubelet[3656]: E1216 12:46:18.764536 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.764881 kubelet[3656]: E1216 12:46:18.764791 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.764881 kubelet[3656]: W1216 12:46:18.764804 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.765144 kubelet[3656]: E1216 12:46:18.765120 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.765442 kubelet[3656]: E1216 12:46:18.765420 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.765442 kubelet[3656]: W1216 12:46:18.765435 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.765508 kubelet[3656]: E1216 12:46:18.765446 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.765960 kubelet[3656]: E1216 12:46:18.765937 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.766122 kubelet[3656]: W1216 12:46:18.765951 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.766122 kubelet[3656]: E1216 12:46:18.766061 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:18.766437 kubelet[3656]: E1216 12:46:18.766417 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.766437 kubelet[3656]: W1216 12:46:18.766431 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.766495 kubelet[3656]: E1216 12:46:18.766441 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.766764 kubelet[3656]: E1216 12:46:18.766748 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.766764 kubelet[3656]: W1216 12:46:18.766759 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.766764 kubelet[3656]: E1216 12:46:18.766769 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.767108 kubelet[3656]: E1216 12:46:18.767065 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.767421 kubelet[3656]: W1216 12:46:18.767076 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.767459 kubelet[3656]: E1216 12:46:18.767426 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.767654 kubelet[3656]: E1216 12:46:18.767639 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.767654 kubelet[3656]: W1216 12:46:18.767651 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.767717 kubelet[3656]: E1216 12:46:18.767660 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.767800 kubelet[3656]: E1216 12:46:18.767788 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.767800 kubelet[3656]: W1216 12:46:18.767796 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.767940 kubelet[3656]: E1216 12:46:18.767802 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:18.768205 kubelet[3656]: E1216 12:46:18.768185 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.768205 kubelet[3656]: W1216 12:46:18.768202 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.768266 kubelet[3656]: E1216 12:46:18.768212 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.768615 kubelet[3656]: E1216 12:46:18.768598 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.768615 kubelet[3656]: W1216 12:46:18.768612 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.768690 kubelet[3656]: E1216 12:46:18.768622 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.769438 kubelet[3656]: E1216 12:46:18.769419 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.769438 kubelet[3656]: W1216 12:46:18.769434 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.769525 kubelet[3656]: E1216 12:46:18.769444 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.769603 kubelet[3656]: E1216 12:46:18.769591 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.769603 kubelet[3656]: W1216 12:46:18.769600 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.769721 kubelet[3656]: E1216 12:46:18.769607 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.769846 kubelet[3656]: E1216 12:46:18.769830 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.769846 kubelet[3656]: W1216 12:46:18.769842 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.769924 kubelet[3656]: E1216 12:46:18.769851 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:18.785918 kernel: kauditd_printk_skb: 86 callbacks suppressed Dec 16 12:46:18.786051 kernel: audit: type=1334 audit(1765889178.777:588): prog-id=190 op=LOAD Dec 16 12:46:18.777000 audit: BPF prog-id=190 op=LOAD Dec 16 12:46:18.777000 audit[4303]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=4146 pid=4303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:18.802473 kernel: audit: type=1300 audit(1765889178.777:588): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=4146 pid=4303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:18.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643061386339356134366337316361393931313765313332393231 Dec 16 12:46:18.818120 kernel: audit: type=1327 audit(1765889178.777:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643061386339356134366337316361393931313765313332393231 Dec 16 12:46:18.778000 audit: BPF prog-id=191 op=LOAD Dec 16 12:46:18.822578 kernel: audit: type=1334 audit(1765889178.778:589): prog-id=191 op=LOAD Dec 16 12:46:18.778000 audit[4303]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=4146 pid=4303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:18.839038 kernel: audit: type=1300 audit(1765889178.778:589): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=4146 pid=4303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:18.840285 kubelet[3656]: E1216 12:46:18.840244 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.840627 kubelet[3656]: W1216 12:46:18.840438 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.840627 kubelet[3656]: E1216 12:46:18.840465 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:18.841392 kubelet[3656]: E1216 12:46:18.841323 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.841392 kubelet[3656]: W1216 12:46:18.841333 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.841392 kubelet[3656]: E1216 12:46:18.841344 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.841800 kubelet[3656]: E1216 12:46:18.841724 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.841800 kubelet[3656]: W1216 12:46:18.841735 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.841800 kubelet[3656]: E1216 12:46:18.841750 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.843570 kubelet[3656]: E1216 12:46:18.843527 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.843570 kubelet[3656]: W1216 12:46:18.843539 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.843570 kubelet[3656]: E1216 12:46:18.843551 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.843906 kubelet[3656]: E1216 12:46:18.843891 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.844065 kubelet[3656]: W1216 12:46:18.843956 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.844065 kubelet[3656]: E1216 12:46:18.843968 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:18.844462 kubelet[3656]: E1216 12:46:18.844426 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.844462 kubelet[3656]: W1216 12:46:18.844437 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.844817 kubelet[3656]: E1216 12:46:18.844807 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.845004 kubelet[3656]: W1216 12:46:18.844882 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.845366 kubelet[3656]: E1216 12:46:18.845308 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643061386339356134366337316361393931313765313332393231 Dec 16 12:46:18.851295 kubelet[3656]: W1216 12:46:18.845443 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.851295 kubelet[3656]: E1216 12:46:18.845460 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.851295 kubelet[3656]: E1216 12:46:18.847728 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.851295 kubelet[3656]: E1216 12:46:18.847763 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.860322 kernel: audit: type=1327 audit(1765889178.778:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643061386339356134366337316361393931313765313332393231 Dec 16 12:46:18.862852 kubelet[3656]: E1216 12:46:18.862820 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.863163 kubelet[3656]: W1216 12:46:18.863009 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.780000 audit: BPF prog-id=191 op=UNLOAD Dec 16 12:46:18.864962 kubelet[3656]: E1216 12:46:18.864116 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:18.867211 kernel: audit: type=1334 audit(1765889178.780:590): prog-id=191 op=UNLOAD Dec 16 12:46:18.780000 audit[4303]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4146 pid=4303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:18.869043 kubelet[3656]: E1216 12:46:18.868263 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.869043 kubelet[3656]: W1216 12:46:18.868283 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.869043 kubelet[3656]: E1216 12:46:18.868310 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.878293 kubelet[3656]: E1216 12:46:18.878265 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.878605 kubelet[3656]: W1216 12:46:18.878459 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.878765 kubelet[3656]: E1216 12:46:18.878704 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.880230 kubelet[3656]: E1216 12:46:18.880217 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.880342 kubelet[3656]: W1216 12:46:18.880301 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.880423 kubelet[3656]: E1216 12:46:18.880393 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:18.882770 kernel: audit: type=1300 audit(1765889178.780:590): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4146 pid=4303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:18.884415 kubelet[3656]: E1216 12:46:18.884371 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.884415 kubelet[3656]: W1216 12:46:18.884389 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.780000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643061386339356134366337316361393931313765313332393231 Dec 16 12:46:18.885443 kubelet[3656]: E1216 12:46:18.885008 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.887705 containerd[2141]: time="2025-12-16T12:46:18.887637797Z" level=info msg="StartContainer for \"2ed0a8c95a46c71ca99117e13292174a7cf7c83eab2028262738817b78d06f58\" returns successfully" Dec 16 12:46:18.898753 kernel: audit: type=1327 audit(1765889178.780:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643061386339356134366337316361393931313765313332393231 Dec 16 12:46:18.780000 audit: BPF prog-id=190 op=UNLOAD Dec 16 12:46:18.902104 kubelet[3656]: E1216 12:46:18.901208 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.902104 kubelet[3656]: W1216 12:46:18.901231 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.903077 kubelet[3656]: E1216 12:46:18.902446 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:46:18.904545 kernel: audit: type=1334 audit(1765889178.780:591): prog-id=190 op=UNLOAD Dec 16 12:46:18.780000 audit[4303]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4146 pid=4303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:18.780000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643061386339356134366337316361393931313765313332393231 Dec 16 12:46:18.780000 audit: BPF prog-id=192 op=LOAD Dec 16 12:46:18.780000 audit[4303]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=4146 pid=4303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:18.780000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265643061386339356134366337316361393931313765313332393231 Dec 16 12:46:18.906283 kubelet[3656]: E1216 12:46:18.905968 3656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:46:18.906283 kubelet[3656]: W1216 12:46:18.905987 3656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:46:18.906283 kubelet[3656]: E1216 12:46:18.906011 3656 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:46:18.907618 systemd[1]: cri-containerd-2ed0a8c95a46c71ca99117e13292174a7cf7c83eab2028262738817b78d06f58.scope: Deactivated successfully. Dec 16 12:46:18.910000 audit: BPF prog-id=192 op=UNLOAD Dec 16 12:46:18.913979 containerd[2141]: time="2025-12-16T12:46:18.913841653Z" level=info msg="received container exit event container_id:\"2ed0a8c95a46c71ca99117e13292174a7cf7c83eab2028262738817b78d06f58\" id:\"2ed0a8c95a46c71ca99117e13292174a7cf7c83eab2028262738817b78d06f58\" pid:4315 exited_at:{seconds:1765889178 nanos:912761597}" Dec 16 12:46:18.935536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ed0a8c95a46c71ca99117e13292174a7cf7c83eab2028262738817b78d06f58-rootfs.mount: Deactivated successfully. 
Dec 16 12:46:19.564645 kubelet[3656]: E1216 12:46:19.564487 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q9ldj" podUID="c8d2f0f9-d4bf-424e-80b4-888570287c6a" Dec 16 12:46:20.674230 containerd[2141]: time="2025-12-16T12:46:20.674108463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:46:21.565485 kubelet[3656]: E1216 12:46:21.565439 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q9ldj" podUID="c8d2f0f9-d4bf-424e-80b4-888570287c6a" Dec 16 12:46:22.868203 containerd[2141]: time="2025-12-16T12:46:22.868130342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:22.870554 containerd[2141]: time="2025-12-16T12:46:22.870475709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 16 12:46:22.874565 containerd[2141]: time="2025-12-16T12:46:22.874235623Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:22.878320 containerd[2141]: time="2025-12-16T12:46:22.878273889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:22.879450 containerd[2141]: time="2025-12-16T12:46:22.879424388Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.205275843s" Dec 16 12:46:22.879567 containerd[2141]: time="2025-12-16T12:46:22.879552184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 12:46:22.883088 containerd[2141]: time="2025-12-16T12:46:22.882305011Z" level=info msg="CreateContainer within sandbox \"4ac1f0a0cfd50d75e4c4bacb9b24f409be450d1ebe4100264f9afe9e9a5da35b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:46:22.913483 containerd[2141]: time="2025-12-16T12:46:22.913435432Z" level=info msg="Container 7e2f949c3a56e22acd01753cedf35a431eb9c2158b17afbc84b6433da8dff0aa: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:22.992386 containerd[2141]: time="2025-12-16T12:46:22.992331890Z" level=info msg="CreateContainer within sandbox \"4ac1f0a0cfd50d75e4c4bacb9b24f409be450d1ebe4100264f9afe9e9a5da35b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7e2f949c3a56e22acd01753cedf35a431eb9c2158b17afbc84b6433da8dff0aa\"" Dec 16 12:46:22.993453 containerd[2141]: time="2025-12-16T12:46:22.993342097Z" level=info msg="StartContainer for \"7e2f949c3a56e22acd01753cedf35a431eb9c2158b17afbc84b6433da8dff0aa\"" Dec 16 12:46:22.994845 
containerd[2141]: time="2025-12-16T12:46:22.994815637Z" level=info msg="connecting to shim 7e2f949c3a56e22acd01753cedf35a431eb9c2158b17afbc84b6433da8dff0aa" address="unix:///run/containerd/s/8ec3e0ae93bebcfca6b9fac8015acda88d75aa08bfbf4524787d6a5549a366e0" protocol=ttrpc version=3 Dec 16 12:46:23.015319 systemd[1]: Started cri-containerd-7e2f949c3a56e22acd01753cedf35a431eb9c2158b17afbc84b6433da8dff0aa.scope - libcontainer container 7e2f949c3a56e22acd01753cedf35a431eb9c2158b17afbc84b6433da8dff0aa. Dec 16 12:46:23.061000 audit: BPF prog-id=193 op=LOAD Dec 16 12:46:23.061000 audit[4400]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=4146 pid=4400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765326639343963336135366532326163643031373533636564663335 Dec 16 12:46:23.061000 audit: BPF prog-id=194 op=LOAD Dec 16 12:46:23.061000 audit[4400]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=4146 pid=4400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765326639343963336135366532326163643031373533636564663335 Dec 16 12:46:23.061000 audit: BPF prog-id=194 op=UNLOAD Dec 16 12:46:23.061000 audit[4400]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4146 pid=4400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765326639343963336135366532326163643031373533636564663335 Dec 16 12:46:23.061000 audit: BPF prog-id=193 op=UNLOAD Dec 16 12:46:23.061000 audit[4400]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4146 pid=4400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765326639343963336135366532326163643031373533636564663335 Dec 16 12:46:23.061000 audit: BPF prog-id=195 op=LOAD Dec 16 12:46:23.061000 audit[4400]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=4146 pid=4400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:23.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765326639343963336135366532326163643031373533636564663335 Dec 16 12:46:23.085723 containerd[2141]: time="2025-12-16T12:46:23.085675577Z" level=info msg="StartContainer for \"7e2f949c3a56e22acd01753cedf35a431eb9c2158b17afbc84b6433da8dff0aa\" returns successfully" Dec 16 12:46:23.564756 kubelet[3656]: E1216 12:46:23.564352 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q9ldj" podUID="c8d2f0f9-d4bf-424e-80b4-888570287c6a" Dec 16 12:46:24.225430 containerd[2141]: time="2025-12-16T12:46:24.225379891Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:46:24.227678 systemd[1]: cri-containerd-7e2f949c3a56e22acd01753cedf35a431eb9c2158b17afbc84b6433da8dff0aa.scope: Deactivated successfully. Dec 16 12:46:24.228402 systemd[1]: cri-containerd-7e2f949c3a56e22acd01753cedf35a431eb9c2158b17afbc84b6433da8dff0aa.scope: Consumed 344ms CPU time, 185.6M memory peak, 165.9M written to disk. Dec 16 12:46:24.230889 containerd[2141]: time="2025-12-16T12:46:24.230707076Z" level=info msg="received container exit event container_id:\"7e2f949c3a56e22acd01753cedf35a431eb9c2158b17afbc84b6433da8dff0aa\" id:\"7e2f949c3a56e22acd01753cedf35a431eb9c2158b17afbc84b6433da8dff0aa\" pid:4412 exited_at:{seconds:1765889184 nanos:230334785}" Dec 16 12:46:24.240169 kernel: kauditd_printk_skb: 21 callbacks suppressed Dec 16 12:46:24.240316 kernel: audit: type=1334 audit(1765889184.231:599): prog-id=195 op=UNLOAD Dec 16 12:46:24.231000 audit: BPF prog-id=195 op=UNLOAD Dec 16 12:46:24.257879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e2f949c3a56e22acd01753cedf35a431eb9c2158b17afbc84b6433da8dff0aa-rootfs.mount: Deactivated successfully. 
Dec 16 12:46:24.319122 kubelet[3656]: I1216 12:46:24.317276 3656 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 12:46:24.670893 kubelet[3656]: I1216 12:46:24.481375 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7992b8dd-c425-405a-a189-bc8e22badaee-config-volume\") pod \"coredns-668d6bf9bc-f76h8\" (UID: \"7992b8dd-c425-405a-a189-bc8e22badaee\") " pod="kube-system/coredns-668d6bf9bc-f76h8" Dec 16 12:46:24.670893 kubelet[3656]: I1216 12:46:24.481416 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50d00cc7-1203-4290-806c-1437385334b5-config\") pod \"goldmane-666569f655-knbcd\" (UID: \"50d00cc7-1203-4290-806c-1437385334b5\") " pod="calico-system/goldmane-666569f655-knbcd" Dec 16 12:46:24.670893 kubelet[3656]: I1216 12:46:24.481435 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk7lz\" (UniqueName: \"kubernetes.io/projected/cf72c120-6b19-4407-a659-b4a889422882-kube-api-access-bk7lz\") pod \"coredns-668d6bf9bc-v9rsw\" (UID: \"cf72c120-6b19-4407-a659-b4a889422882\") " pod="kube-system/coredns-668d6bf9bc-v9rsw" Dec 16 12:46:24.670893 kubelet[3656]: I1216 12:46:24.481467 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c9372ebb-481a-480c-8bf1-ba7918503e79-calico-apiserver-certs\") pod \"calico-apiserver-8447d995cc-x57ls\" (UID: \"c9372ebb-481a-480c-8bf1-ba7918503e79\") " pod="calico-apiserver/calico-apiserver-8447d995cc-x57ls" Dec 16 12:46:24.670893 kubelet[3656]: I1216 12:46:24.481486 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c9e39e8-3a67-4975-af12-07644724165b-tigera-ca-bundle\") pod \"calico-kube-controllers-6c8c58856c-hs58v\" (UID: \"1c9e39e8-3a67-4975-af12-07644724165b\") " pod="calico-system/calico-kube-controllers-6c8c58856c-hs58v" Dec 16 12:46:24.365385 systemd[1]: Created slice kubepods-besteffort-pod3442cd81_f820_467d_9da4_12b0918f9098.slice - libcontainer container kubepods-besteffort-pod3442cd81_f820_467d_9da4_12b0918f9098.slice. 
Dec 16 12:46:24.671523 kubelet[3656]: I1216 12:46:24.481499 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2222m\" (UniqueName: \"kubernetes.io/projected/7992b8dd-c425-405a-a189-bc8e22badaee-kube-api-access-2222m\") pod \"coredns-668d6bf9bc-f76h8\" (UID: \"7992b8dd-c425-405a-a189-bc8e22badaee\") " pod="kube-system/coredns-668d6bf9bc-f76h8" Dec 16 12:46:24.671523 kubelet[3656]: I1216 12:46:24.481508 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f4be582d-98bf-4dca-8981-8263274550a3-calico-apiserver-certs\") pod \"calico-apiserver-8447d995cc-vpb8q\" (UID: \"f4be582d-98bf-4dca-8981-8263274550a3\") " pod="calico-apiserver/calico-apiserver-8447d995cc-vpb8q" Dec 16 12:46:24.671523 kubelet[3656]: I1216 12:46:24.481519 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/50d00cc7-1203-4290-806c-1437385334b5-goldmane-key-pair\") pod \"goldmane-666569f655-knbcd\" (UID: \"50d00cc7-1203-4290-806c-1437385334b5\") " pod="calico-system/goldmane-666569f655-knbcd" Dec 16 12:46:24.671523 kubelet[3656]: I1216 12:46:24.481533 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf72c120-6b19-4407-a659-b4a889422882-config-volume\") pod \"coredns-668d6bf9bc-v9rsw\" (UID: \"cf72c120-6b19-4407-a659-b4a889422882\") " pod="kube-system/coredns-668d6bf9bc-v9rsw" Dec 16 12:46:24.671523 kubelet[3656]: I1216 12:46:24.481545 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg9nt\" (UniqueName: \"kubernetes.io/projected/c9372ebb-481a-480c-8bf1-ba7918503e79-kube-api-access-zg9nt\") pod \"calico-apiserver-8447d995cc-x57ls\" (UID: \"c9372ebb-481a-480c-8bf1-ba7918503e79\") " pod="calico-apiserver/calico-apiserver-8447d995cc-x57ls" Dec 16 12:46:24.377745 systemd[1]: Created slice kubepods-besteffort-pod1c9e39e8_3a67_4975_af12_07644724165b.slice - libcontainer container kubepods-besteffort-pod1c9e39e8_3a67_4975_af12_07644724165b.slice. 
Dec 16 12:46:24.671655 kubelet[3656]: I1216 12:46:24.481558 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50d00cc7-1203-4290-806c-1437385334b5-goldmane-ca-bundle\") pod \"goldmane-666569f655-knbcd\" (UID: \"50d00cc7-1203-4290-806c-1437385334b5\") " pod="calico-system/goldmane-666569f655-knbcd" Dec 16 12:46:24.671655 kubelet[3656]: I1216 12:46:24.481570 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx7pt\" (UniqueName: \"kubernetes.io/projected/50d00cc7-1203-4290-806c-1437385334b5-kube-api-access-hx7pt\") pod \"goldmane-666569f655-knbcd\" (UID: \"50d00cc7-1203-4290-806c-1437385334b5\") " pod="calico-system/goldmane-666569f655-knbcd" Dec 16 12:46:24.671655 kubelet[3656]: I1216 12:46:24.481584 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3442cd81-f820-467d-9da4-12b0918f9098-whisker-backend-key-pair\") pod \"whisker-5498459f9d-c5ctz\" (UID: \"3442cd81-f820-467d-9da4-12b0918f9098\") " pod="calico-system/whisker-5498459f9d-c5ctz" Dec 16 12:46:24.671655 kubelet[3656]: I1216 12:46:24.481606 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcrwq\" (UniqueName: \"kubernetes.io/projected/1c9e39e8-3a67-4975-af12-07644724165b-kube-api-access-hcrwq\") pod \"calico-kube-controllers-6c8c58856c-hs58v\" (UID: \"1c9e39e8-3a67-4975-af12-07644724165b\") " pod="calico-system/calico-kube-controllers-6c8c58856c-hs58v" Dec 16 12:46:24.671655 kubelet[3656]: I1216 12:46:24.481618 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lszvx\" (UniqueName: \"kubernetes.io/projected/f4be582d-98bf-4dca-8981-8263274550a3-kube-api-access-lszvx\") pod \"calico-apiserver-8447d995cc-vpb8q\" (UID: \"f4be582d-98bf-4dca-8981-8263274550a3\") " pod="calico-apiserver/calico-apiserver-8447d995cc-vpb8q" Dec 16 12:46:24.392169 systemd[1]: Created slice kubepods-burstable-pod7992b8dd_c425_405a_a189_bc8e22badaee.slice - libcontainer container kubepods-burstable-pod7992b8dd_c425_405a_a189_bc8e22badaee.slice. Dec 16 12:46:24.671767 kubelet[3656]: I1216 12:46:24.481628 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3442cd81-f820-467d-9da4-12b0918f9098-whisker-ca-bundle\") pod \"whisker-5498459f9d-c5ctz\" (UID: \"3442cd81-f820-467d-9da4-12b0918f9098\") " pod="calico-system/whisker-5498459f9d-c5ctz" Dec 16 12:46:24.671767 kubelet[3656]: I1216 12:46:24.481638 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69hpx\" (UniqueName: \"kubernetes.io/projected/3442cd81-f820-467d-9da4-12b0918f9098-kube-api-access-69hpx\") pod \"whisker-5498459f9d-c5ctz\" (UID: \"3442cd81-f820-467d-9da4-12b0918f9098\") " pod="calico-system/whisker-5498459f9d-c5ctz" Dec 16 12:46:24.399634 systemd[1]: Created slice kubepods-burstable-podcf72c120_6b19_4407_a659_b4a889422882.slice - libcontainer container kubepods-burstable-podcf72c120_6b19_4407_a659_b4a889422882.slice. 
Dec 16 12:46:24.407312 systemd[1]: Created slice kubepods-besteffort-podc9372ebb_481a_480c_8bf1_ba7918503e79.slice - libcontainer container kubepods-besteffort-podc9372ebb_481a_480c_8bf1_ba7918503e79.slice. Dec 16 12:46:24.415246 systemd[1]: Created slice kubepods-besteffort-podf4be582d_98bf_4dca_8981_8263274550a3.slice - libcontainer container kubepods-besteffort-podf4be582d_98bf_4dca_8981_8263274550a3.slice. Dec 16 12:46:24.422130 systemd[1]: Created slice kubepods-besteffort-pod50d00cc7_1203_4290_806c_1437385334b5.slice - libcontainer container kubepods-besteffort-pod50d00cc7_1203_4290_806c_1437385334b5.slice. Dec 16 12:46:24.975324 containerd[2141]: time="2025-12-16T12:46:24.974776274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5498459f9d-c5ctz,Uid:3442cd81-f820-467d-9da4-12b0918f9098,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:25.003974 containerd[2141]: time="2025-12-16T12:46:25.003691373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f76h8,Uid:7992b8dd-c425-405a-a189-bc8e22badaee,Namespace:kube-system,Attempt:0,}" Dec 16 12:46:25.003974 containerd[2141]: time="2025-12-16T12:46:25.003958317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c8c58856c-hs58v,Uid:1c9e39e8-3a67-4975-af12-07644724165b,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:25.004245 containerd[2141]: time="2025-12-16T12:46:25.003832609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8447d995cc-vpb8q,Uid:f4be582d-98bf-4dca-8981-8263274550a3,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:46:25.004245 containerd[2141]: time="2025-12-16T12:46:25.003859690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v9rsw,Uid:cf72c120-6b19-4407-a659-b4a889422882,Namespace:kube-system,Attempt:0,}" Dec 16 12:46:25.004245 containerd[2141]: time="2025-12-16T12:46:25.003930444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8447d995cc-x57ls,Uid:c9372ebb-481a-480c-8bf1-ba7918503e79,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:46:25.004414 containerd[2141]: time="2025-12-16T12:46:25.004325344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-knbcd,Uid:50d00cc7-1203-4290-806c-1437385334b5,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:25.200866 containerd[2141]: time="2025-12-16T12:46:25.200813947Z" level=error msg="Failed to destroy network for sandbox \"35b4748157437f3efa6fe14a1a45ca70ec8acd0cd095e2eb67ebcbb0045841b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.281694 containerd[2141]: time="2025-12-16T12:46:25.281164255Z" level=error msg="Failed to destroy network for sandbox \"a5384c974ae620c5572af598000d1b4de10d6754b0bc80e7f6af8fe281fad3fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.284416 systemd[1]: run-netns-cni\x2d8510e483\x2dcb4f\x2dd5d3\x2d0a2c\x2db302f51d2f2b.mount: Deactivated successfully. 
Dec 16 12:46:25.292070 containerd[2141]: time="2025-12-16T12:46:25.291981150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5498459f9d-c5ctz,Uid:3442cd81-f820-467d-9da4-12b0918f9098,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"35b4748157437f3efa6fe14a1a45ca70ec8acd0cd095e2eb67ebcbb0045841b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.292489 kubelet[3656]: E1216 12:46:25.292264 3656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35b4748157437f3efa6fe14a1a45ca70ec8acd0cd095e2eb67ebcbb0045841b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.292489 kubelet[3656]: E1216 12:46:25.292443 3656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35b4748157437f3efa6fe14a1a45ca70ec8acd0cd095e2eb67ebcbb0045841b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5498459f9d-c5ctz" Dec 16 12:46:25.292489 kubelet[3656]: E1216 12:46:25.292461 3656 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35b4748157437f3efa6fe14a1a45ca70ec8acd0cd095e2eb67ebcbb0045841b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5498459f9d-c5ctz" Dec 16 12:46:25.292607 kubelet[3656]: E1216 12:46:25.292508 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5498459f9d-c5ctz_calico-system(3442cd81-f820-467d-9da4-12b0918f9098)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5498459f9d-c5ctz_calico-system(3442cd81-f820-467d-9da4-12b0918f9098)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35b4748157437f3efa6fe14a1a45ca70ec8acd0cd095e2eb67ebcbb0045841b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5498459f9d-c5ctz" podUID="3442cd81-f820-467d-9da4-12b0918f9098" Dec 16 12:46:25.298541 containerd[2141]: time="2025-12-16T12:46:25.298056352Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f76h8,Uid:7992b8dd-c425-405a-a189-bc8e22badaee,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5384c974ae620c5572af598000d1b4de10d6754b0bc80e7f6af8fe281fad3fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.299115 kubelet[3656]: E1216 12:46:25.298862 3656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"a5384c974ae620c5572af598000d1b4de10d6754b0bc80e7f6af8fe281fad3fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.299282 kubelet[3656]: E1216 12:46:25.299148 3656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5384c974ae620c5572af598000d1b4de10d6754b0bc80e7f6af8fe281fad3fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-f76h8" Dec 16 12:46:25.299282 kubelet[3656]: E1216 12:46:25.299174 3656 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5384c974ae620c5572af598000d1b4de10d6754b0bc80e7f6af8fe281fad3fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-f76h8" Dec 16 12:46:25.299282 kubelet[3656]: E1216 12:46:25.299225 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-f76h8_kube-system(7992b8dd-c425-405a-a189-bc8e22badaee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-f76h8_kube-system(7992b8dd-c425-405a-a189-bc8e22badaee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5384c974ae620c5572af598000d1b4de10d6754b0bc80e7f6af8fe281fad3fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-f76h8" podUID="7992b8dd-c425-405a-a189-bc8e22badaee" Dec 16 12:46:25.303868 containerd[2141]: time="2025-12-16T12:46:25.303703166Z" level=error msg="Failed to destroy network for sandbox \"850f895e00b6b7e9d66693810025916e5a59d9d84115e5c1a751a162ea97ce74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.307042 systemd[1]: run-netns-cni\x2d1223229e\x2dae93\x2dc8c4\x2d185a\x2d08eed7658d9a.mount: Deactivated successfully. 
Dec 16 12:46:25.312978 containerd[2141]: time="2025-12-16T12:46:25.312241718Z" level=error msg="Failed to destroy network for sandbox \"11aed6dfe4c560830b3b7453249c30b818cdab86536094a889ceac0456949438\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.315871 containerd[2141]: time="2025-12-16T12:46:25.313003915Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c8c58856c-hs58v,Uid:1c9e39e8-3a67-4975-af12-07644724165b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"850f895e00b6b7e9d66693810025916e5a59d9d84115e5c1a751a162ea97ce74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.316021 kubelet[3656]: E1216 12:46:25.315254 3656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"850f895e00b6b7e9d66693810025916e5a59d9d84115e5c1a751a162ea97ce74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.315153 systemd[1]: run-netns-cni\x2d9873fb63\x2d617c\x2d6307\x2d4efe\x2d929bee42929c.mount: Deactivated successfully. Dec 16 12:46:25.316930 kubelet[3656]: E1216 12:46:25.315310 3656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"850f895e00b6b7e9d66693810025916e5a59d9d84115e5c1a751a162ea97ce74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c8c58856c-hs58v" Dec 16 12:46:25.316930 kubelet[3656]: E1216 12:46:25.316234 3656 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"850f895e00b6b7e9d66693810025916e5a59d9d84115e5c1a751a162ea97ce74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c8c58856c-hs58v" Dec 16 12:46:25.316930 kubelet[3656]: E1216 12:46:25.316299 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c8c58856c-hs58v_calico-system(1c9e39e8-3a67-4975-af12-07644724165b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c8c58856c-hs58v_calico-system(1c9e39e8-3a67-4975-af12-07644724165b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"850f895e00b6b7e9d66693810025916e5a59d9d84115e5c1a751a162ea97ce74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c8c58856c-hs58v" podUID="1c9e39e8-3a67-4975-af12-07644724165b" Dec 16 12:46:25.326063 containerd[2141]: time="2025-12-16T12:46:25.325990983Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-8447d995cc-vpb8q,Uid:f4be582d-98bf-4dca-8981-8263274550a3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"11aed6dfe4c560830b3b7453249c30b818cdab86536094a889ceac0456949438\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.327236 kubelet[3656]: E1216 12:46:25.326529 3656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11aed6dfe4c560830b3b7453249c30b818cdab86536094a889ceac0456949438\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.327236 kubelet[3656]: E1216 12:46:25.327190 3656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11aed6dfe4c560830b3b7453249c30b818cdab86536094a889ceac0456949438\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8447d995cc-vpb8q" Dec 16 12:46:25.327236 kubelet[3656]: E1216 12:46:25.327210 3656 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11aed6dfe4c560830b3b7453249c30b818cdab86536094a889ceac0456949438\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8447d995cc-vpb8q" Dec 16 12:46:25.327553 kubelet[3656]: E1216 12:46:25.327514 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8447d995cc-vpb8q_calico-apiserver(f4be582d-98bf-4dca-8981-8263274550a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8447d995cc-vpb8q_calico-apiserver(f4be582d-98bf-4dca-8981-8263274550a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11aed6dfe4c560830b3b7453249c30b818cdab86536094a889ceac0456949438\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8447d995cc-vpb8q" podUID="f4be582d-98bf-4dca-8981-8263274550a3" Dec 16 12:46:25.371299 containerd[2141]: time="2025-12-16T12:46:25.371238635Z" level=error msg="Failed to destroy network for sandbox \"113350100ca266c6188a91e5b9afd6e950d153aecbacbb713bd86fe888c92dae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.373166 systemd[1]: run-netns-cni\x2d25840622\x2d31b7\x2d6aed\x2d8ce5\x2d8831f098a857.mount: Deactivated successfully. 
Dec 16 12:46:25.375522 containerd[2141]: time="2025-12-16T12:46:25.375236227Z" level=error msg="Failed to destroy network for sandbox \"8a236d25d3f541a05038ecfe7aa76eec58ea2c663509edbc55787f3ecb571b6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.378199 containerd[2141]: time="2025-12-16T12:46:25.378156125Z" level=error msg="Failed to destroy network for sandbox \"8850b798e36e6f157f9c2a476f6ff8a3912d84e11640de3ca1633463a4f83b52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.379130 containerd[2141]: time="2025-12-16T12:46:25.379088759Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v9rsw,Uid:cf72c120-6b19-4407-a659-b4a889422882,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"113350100ca266c6188a91e5b9afd6e950d153aecbacbb713bd86fe888c92dae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.379354 kubelet[3656]: E1216 12:46:25.379317 3656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"113350100ca266c6188a91e5b9afd6e950d153aecbacbb713bd86fe888c92dae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.379414 kubelet[3656]: E1216 12:46:25.379380 3656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"113350100ca266c6188a91e5b9afd6e950d153aecbacbb713bd86fe888c92dae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v9rsw" Dec 16 12:46:25.379414 kubelet[3656]: E1216 12:46:25.379396 3656 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"113350100ca266c6188a91e5b9afd6e950d153aecbacbb713bd86fe888c92dae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v9rsw" Dec 16 12:46:25.379451 kubelet[3656]: E1216 12:46:25.379429 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-v9rsw_kube-system(cf72c120-6b19-4407-a659-b4a889422882)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-v9rsw_kube-system(cf72c120-6b19-4407-a659-b4a889422882)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"113350100ca266c6188a91e5b9afd6e950d153aecbacbb713bd86fe888c92dae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-v9rsw" 
podUID="cf72c120-6b19-4407-a659-b4a889422882" Dec 16 12:46:25.387949 containerd[2141]: time="2025-12-16T12:46:25.387804547Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8447d995cc-x57ls,Uid:c9372ebb-481a-480c-8bf1-ba7918503e79,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a236d25d3f541a05038ecfe7aa76eec58ea2c663509edbc55787f3ecb571b6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.388289 kubelet[3656]: E1216 12:46:25.388245 3656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a236d25d3f541a05038ecfe7aa76eec58ea2c663509edbc55787f3ecb571b6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.388350 kubelet[3656]: E1216 12:46:25.388312 3656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a236d25d3f541a05038ecfe7aa76eec58ea2c663509edbc55787f3ecb571b6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8447d995cc-x57ls" Dec 16 12:46:25.388350 kubelet[3656]: E1216 12:46:25.388329 3656 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a236d25d3f541a05038ecfe7aa76eec58ea2c663509edbc55787f3ecb571b6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8447d995cc-x57ls" Dec 16 12:46:25.389543 kubelet[3656]: E1216 12:46:25.389286 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8447d995cc-x57ls_calico-apiserver(c9372ebb-481a-480c-8bf1-ba7918503e79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8447d995cc-x57ls_calico-apiserver(c9372ebb-481a-480c-8bf1-ba7918503e79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a236d25d3f541a05038ecfe7aa76eec58ea2c663509edbc55787f3ecb571b6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8447d995cc-x57ls" podUID="c9372ebb-481a-480c-8bf1-ba7918503e79" Dec 16 12:46:25.390645 containerd[2141]: time="2025-12-16T12:46:25.390600705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-knbcd,Uid:50d00cc7-1203-4290-806c-1437385334b5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8850b798e36e6f157f9c2a476f6ff8a3912d84e11640de3ca1633463a4f83b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.390895 kubelet[3656]: E1216 
12:46:25.390866 3656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8850b798e36e6f157f9c2a476f6ff8a3912d84e11640de3ca1633463a4f83b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.390943 kubelet[3656]: E1216 12:46:25.390908 3656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8850b798e36e6f157f9c2a476f6ff8a3912d84e11640de3ca1633463a4f83b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-knbcd" Dec 16 12:46:25.390943 kubelet[3656]: E1216 12:46:25.390927 3656 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8850b798e36e6f157f9c2a476f6ff8a3912d84e11640de3ca1633463a4f83b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-knbcd" Dec 16 12:46:25.391036 kubelet[3656]: E1216 12:46:25.390959 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-knbcd_calico-system(50d00cc7-1203-4290-806c-1437385334b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-knbcd_calico-system(50d00cc7-1203-4290-806c-1437385334b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8850b798e36e6f157f9c2a476f6ff8a3912d84e11640de3ca1633463a4f83b52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-knbcd" podUID="50d00cc7-1203-4290-806c-1437385334b5" Dec 16 12:46:25.570338 systemd[1]: Created slice kubepods-besteffort-podc8d2f0f9_d4bf_424e_80b4_888570287c6a.slice - libcontainer container kubepods-besteffort-podc8d2f0f9_d4bf_424e_80b4_888570287c6a.slice. 
Dec 16 12:46:25.573903 containerd[2141]: time="2025-12-16T12:46:25.573855073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q9ldj,Uid:c8d2f0f9-d4bf-424e-80b4-888570287c6a,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:25.620447 containerd[2141]: time="2025-12-16T12:46:25.620369264Z" level=error msg="Failed to destroy network for sandbox \"430bd70d595c736cd2b44b98667803377e49f33ffc3159e2c309a0e0e4c75c38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.628498 containerd[2141]: time="2025-12-16T12:46:25.628358912Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q9ldj,Uid:c8d2f0f9-d4bf-424e-80b4-888570287c6a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"430bd70d595c736cd2b44b98667803377e49f33ffc3159e2c309a0e0e4c75c38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.629149 kubelet[3656]: E1216 12:46:25.628639 3656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"430bd70d595c736cd2b44b98667803377e49f33ffc3159e2c309a0e0e4c75c38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:46:25.629149 kubelet[3656]: E1216 12:46:25.628700 3656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"430bd70d595c736cd2b44b98667803377e49f33ffc3159e2c309a0e0e4c75c38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q9ldj" Dec 16 12:46:25.629149 kubelet[3656]: E1216 12:46:25.628723 3656 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"430bd70d595c736cd2b44b98667803377e49f33ffc3159e2c309a0e0e4c75c38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q9ldj" Dec 16 12:46:25.629244 kubelet[3656]: E1216 12:46:25.628766 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q9ldj_calico-system(c8d2f0f9-d4bf-424e-80b4-888570287c6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q9ldj_calico-system(c8d2f0f9-d4bf-424e-80b4-888570287c6a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"430bd70d595c736cd2b44b98667803377e49f33ffc3159e2c309a0e0e4c75c38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q9ldj" podUID="c8d2f0f9-d4bf-424e-80b4-888570287c6a" Dec 16 12:46:25.693907 containerd[2141]: time="2025-12-16T12:46:25.693483561Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:46:26.257887 systemd[1]: run-netns-cni\x2d417e083a\x2d4704\x2d8315\x2d15d8\x2df3876a8edca1.mount: Deactivated successfully. Dec 16 12:46:26.258026 systemd[1]: run-netns-cni\x2df3d5f448\x2db7fb\x2da441\x2ddf3c\x2db211313c31e8.mount: Deactivated successfully. Dec 16 12:46:29.520981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount442634739.mount: Deactivated successfully. Dec 16 12:46:29.837175 containerd[2141]: time="2025-12-16T12:46:29.836560959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:29.840158 containerd[2141]: time="2025-12-16T12:46:29.840104018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 16 12:46:29.882261 containerd[2141]: time="2025-12-16T12:46:29.882173885Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:29.887236 containerd[2141]: time="2025-12-16T12:46:29.887175193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:46:29.887531 containerd[2141]: time="2025-12-16T12:46:29.887493234Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.193957296s" Dec 16 12:46:29.887531 containerd[2141]: time="2025-12-16T12:46:29.887526763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 12:46:29.900389 containerd[2141]: time="2025-12-16T12:46:29.900325762Z" level=info msg="CreateContainer within sandbox \"4ac1f0a0cfd50d75e4c4bacb9b24f409be450d1ebe4100264f9afe9e9a5da35b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:46:29.930472 containerd[2141]: time="2025-12-16T12:46:29.928832289Z" level=info msg="Container 9c89875cfe51e26e0ee5a4cbb66b9c710722ec9a0f0286b0336ee0db01cac20d: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:29.962324 containerd[2141]: time="2025-12-16T12:46:29.962232961Z" level=info msg="CreateContainer within sandbox \"4ac1f0a0cfd50d75e4c4bacb9b24f409be450d1ebe4100264f9afe9e9a5da35b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9c89875cfe51e26e0ee5a4cbb66b9c710722ec9a0f0286b0336ee0db01cac20d\"" Dec 16 12:46:29.963264 containerd[2141]: time="2025-12-16T12:46:29.963218044Z" level=info msg="StartContainer for \"9c89875cfe51e26e0ee5a4cbb66b9c710722ec9a0f0286b0336ee0db01cac20d\"" Dec 16 12:46:29.964839 containerd[2141]: time="2025-12-16T12:46:29.964720014Z" level=info msg="connecting to shim 9c89875cfe51e26e0ee5a4cbb66b9c710722ec9a0f0286b0336ee0db01cac20d" address="unix:///run/containerd/s/8ec3e0ae93bebcfca6b9fac8015acda88d75aa08bfbf4524787d6a5549a366e0" protocol=ttrpc version=3 Dec 16 12:46:29.981268 systemd[1]: Started cri-containerd-9c89875cfe51e26e0ee5a4cbb66b9c710722ec9a0f0286b0336ee0db01cac20d.scope - libcontainer container 
9c89875cfe51e26e0ee5a4cbb66b9c710722ec9a0f0286b0336ee0db01cac20d. Dec 16 12:46:30.031000 audit: BPF prog-id=196 op=LOAD Dec 16 12:46:30.031000 audit[4695]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=4146 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.056982 kernel: audit: type=1334 audit(1765889190.031:600): prog-id=196 op=LOAD Dec 16 12:46:30.057152 kernel: audit: type=1300 audit(1765889190.031:600): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=4146 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963383938373563666535316532366530656535613463626236366239 Dec 16 12:46:30.078965 kernel: audit: type=1327 audit(1765889190.031:600): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963383938373563666535316532366530656535613463626236366239 Dec 16 12:46:30.031000 audit: BPF prog-id=197 op=LOAD Dec 16 12:46:30.085185 kernel: audit: type=1334 audit(1765889190.031:601): prog-id=197 op=LOAD Dec 16 12:46:30.031000 audit[4695]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=4146 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.105574 kernel: audit: type=1300 audit(1765889190.031:601): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=4146 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963383938373563666535316532366530656535613463626236366239 Dec 16 12:46:30.123211 kernel: audit: type=1327 audit(1765889190.031:601): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963383938373563666535316532366530656535613463626236366239 Dec 16 12:46:30.036000 audit: BPF prog-id=197 op=UNLOAD Dec 16 12:46:30.128743 kernel: audit: type=1334 audit(1765889190.036:602): prog-id=197 op=UNLOAD Dec 16 12:46:30.036000 audit[4695]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4146 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.146038 kernel: audit: type=1300 
audit(1765889190.036:602): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4146 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963383938373563666535316532366530656535613463626236366239 Dec 16 12:46:30.165244 kernel: audit: type=1327 audit(1765889190.036:602): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963383938373563666535316532366530656535613463626236366239 Dec 16 12:46:30.036000 audit: BPF prog-id=196 op=UNLOAD Dec 16 12:46:30.170359 kernel: audit: type=1334 audit(1765889190.036:603): prog-id=196 op=UNLOAD Dec 16 12:46:30.036000 audit[4695]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4146 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963383938373563666535316532366530656535613463626236366239 Dec 16 12:46:30.036000 audit: BPF prog-id=198 op=LOAD Dec 16 12:46:30.036000 audit[4695]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=4146 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:30.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963383938373563666535316532366530656535613463626236366239 Dec 16 12:46:30.183014 containerd[2141]: time="2025-12-16T12:46:30.182972626Z" level=info msg="StartContainer for \"9c89875cfe51e26e0ee5a4cbb66b9c710722ec9a0f0286b0336ee0db01cac20d\" returns successfully" Dec 16 12:46:30.396944 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:46:30.397112 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
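The long hex strings in the audit PROCTITLE fields above are the process command line with NUL-separated arguments, hex-encoded by the kernel. A small hypothetical helper (not part of runc or auditd) that decodes them into readable form; they all expand to "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/...":

```go
// Hypothetical decoder for the hex-encoded PROCTITLE fields in the audit
// records above; not part of any tool that appears in this log.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	// The kernel joins argv with NUL bytes; show them as spaces instead.
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	// Leading bytes of one of the proctitle values logged above.
	decoded, err := decodeProctitle("72756E63002D2D726F6F74")
	if err != nil {
		panic(err)
	}
	fmt.Println(decoded) // runc --root
}
```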
Dec 16 12:46:30.624151 kubelet[3656]: I1216 12:46:30.623922 3656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3442cd81-f820-467d-9da4-12b0918f9098-whisker-backend-key-pair\") pod \"3442cd81-f820-467d-9da4-12b0918f9098\" (UID: \"3442cd81-f820-467d-9da4-12b0918f9098\") " Dec 16 12:46:30.624151 kubelet[3656]: I1216 12:46:30.623967 3656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3442cd81-f820-467d-9da4-12b0918f9098-whisker-ca-bundle\") pod \"3442cd81-f820-467d-9da4-12b0918f9098\" (UID: \"3442cd81-f820-467d-9da4-12b0918f9098\") " Dec 16 12:46:30.624151 kubelet[3656]: I1216 12:46:30.623993 3656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69hpx\" (UniqueName: \"kubernetes.io/projected/3442cd81-f820-467d-9da4-12b0918f9098-kube-api-access-69hpx\") pod \"3442cd81-f820-467d-9da4-12b0918f9098\" (UID: \"3442cd81-f820-467d-9da4-12b0918f9098\") " Dec 16 12:46:30.628016 systemd[1]: var-lib-kubelet-pods-3442cd81\x2df820\x2d467d\x2d9da4\x2d12b0918f9098-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d69hpx.mount: Deactivated successfully. Dec 16 12:46:30.633247 kubelet[3656]: I1216 12:46:30.633137 3656 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3442cd81-f820-467d-9da4-12b0918f9098-kube-api-access-69hpx" (OuterVolumeSpecName: "kube-api-access-69hpx") pod "3442cd81-f820-467d-9da4-12b0918f9098" (UID: "3442cd81-f820-467d-9da4-12b0918f9098"). InnerVolumeSpecName "kube-api-access-69hpx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:46:30.634606 kubelet[3656]: I1216 12:46:30.633929 3656 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3442cd81-f820-467d-9da4-12b0918f9098-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3442cd81-f820-467d-9da4-12b0918f9098" (UID: "3442cd81-f820-467d-9da4-12b0918f9098"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:46:30.638900 kubelet[3656]: I1216 12:46:30.638604 3656 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3442cd81-f820-467d-9da4-12b0918f9098-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3442cd81-f820-467d-9da4-12b0918f9098" (UID: "3442cd81-f820-467d-9da4-12b0918f9098"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:46:30.639878 systemd[1]: var-lib-kubelet-pods-3442cd81\x2df820\x2d467d\x2d9da4\x2d12b0918f9098-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 12:46:30.721825 systemd[1]: Removed slice kubepods-besteffort-pod3442cd81_f820_467d_9da4_12b0918f9098.slice - libcontainer container kubepods-besteffort-pod3442cd81_f820_467d_9da4_12b0918f9098.slice. 
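The mount-unit names systemd deactivates above (for example var-lib-kubelet-pods-3442cd81\x2df820\x2d...-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d69hpx.mount) are the kubelet volume paths run through systemd's path escaping. A simplified sketch of that escaping, assumed here rather than taken from systemd's implementation: "/" becomes "-", and bytes outside [A-Za-z0-9_.] are written as \xNN.

```go
// Simplified, assumed sketch of the systemd path escaping visible in the
// .mount unit names above; it ignores edge cases such as a leading dot.
package main

import "fmt"

func escapePath(p string) string {
	var out []byte
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			out = append(out, '-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9', c == '_', c == '.':
			out = append(out, c)
		default:
			// Escape everything else as a lowercase hex byte, e.g. '-' -> \x2d, '~' -> \x7e.
			out = append(out, []byte(fmt.Sprintf(`\x%02x`, c))...)
		}
	}
	return string(out)
}

func main() {
	// Reproduces the unit name seen in the log (minus the ".mount" suffix).
	fmt.Println(escapePath("var/lib/kubelet/pods/3442cd81-f820-467d-9da4-12b0918f9098/volumes/kubernetes.io~projected/kube-api-access-69hpx"))
}
```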
Dec 16 12:46:30.725340 kubelet[3656]: I1216 12:46:30.724973 3656 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3442cd81-f820-467d-9da4-12b0918f9098-whisker-backend-key-pair\") on node \"ci-4515.1.0-a-a4975b77c5\" DevicePath \"\"" Dec 16 12:46:30.725340 kubelet[3656]: I1216 12:46:30.725005 3656 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3442cd81-f820-467d-9da4-12b0918f9098-whisker-ca-bundle\") on node \"ci-4515.1.0-a-a4975b77c5\" DevicePath \"\"" Dec 16 12:46:30.725340 kubelet[3656]: I1216 12:46:30.725016 3656 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-69hpx\" (UniqueName: \"kubernetes.io/projected/3442cd81-f820-467d-9da4-12b0918f9098-kube-api-access-69hpx\") on node \"ci-4515.1.0-a-a4975b77c5\" DevicePath \"\"" Dec 16 12:46:30.752088 kubelet[3656]: I1216 12:46:30.751950 3656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-x9mq7" podStartSLOduration=1.741733129 podStartE2EDuration="15.75193233s" podCreationTimestamp="2025-12-16 12:46:15 +0000 UTC" firstStartedPulling="2025-12-16 12:46:15.878325958 +0000 UTC m=+22.386979620" lastFinishedPulling="2025-12-16 12:46:29.888525151 +0000 UTC m=+36.397178821" observedRunningTime="2025-12-16 12:46:30.748977272 +0000 UTC m=+37.257630934" watchObservedRunningTime="2025-12-16 12:46:30.75193233 +0000 UTC m=+37.260585992" Dec 16 12:46:30.822283 systemd[1]: Created slice kubepods-besteffort-pod7003c08f_2a9c_4fb5_8691_d2bf3d7c9d21.slice - libcontainer container kubepods-besteffort-pod7003c08f_2a9c_4fb5_8691_d2bf3d7c9d21.slice. Dec 16 12:46:30.926784 kubelet[3656]: I1216 12:46:30.926637 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvdr8\" (UniqueName: \"kubernetes.io/projected/7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21-kube-api-access-kvdr8\") pod \"whisker-fcd869d9b-bwx78\" (UID: \"7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21\") " pod="calico-system/whisker-fcd869d9b-bwx78" Dec 16 12:46:30.927280 kubelet[3656]: I1216 12:46:30.927194 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21-whisker-backend-key-pair\") pod \"whisker-fcd869d9b-bwx78\" (UID: \"7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21\") " pod="calico-system/whisker-fcd869d9b-bwx78" Dec 16 12:46:30.927280 kubelet[3656]: I1216 12:46:30.927235 3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21-whisker-ca-bundle\") pod \"whisker-fcd869d9b-bwx78\" (UID: \"7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21\") " pod="calico-system/whisker-fcd869d9b-bwx78" Dec 16 12:46:31.128246 containerd[2141]: time="2025-12-16T12:46:31.127985541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fcd869d9b-bwx78,Uid:7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:31.257205 systemd-networkd[1724]: calicfcb039da07: Link UP Dec 16 12:46:31.257920 systemd-networkd[1724]: calicfcb039da07: Gained carrier Dec 16 12:46:31.275804 containerd[2141]: 2025-12-16 12:46:31.154 [INFO][4781] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:46:31.275804 containerd[2141]: 2025-12-16 12:46:31.190 [INFO][4781] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--a4975b77c5-k8s-whisker--fcd869d9b--bwx78-eth0 whisker-fcd869d9b- calico-system 7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21 856 0 2025-12-16 12:46:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:fcd869d9b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515.1.0-a-a4975b77c5 whisker-fcd869d9b-bwx78 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicfcb039da07 [] [] }} ContainerID="dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842" Namespace="calico-system" Pod="whisker-fcd869d9b-bwx78" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-whisker--fcd869d9b--bwx78-" Dec 16 12:46:31.275804 containerd[2141]: 2025-12-16 12:46:31.190 [INFO][4781] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842" Namespace="calico-system" Pod="whisker-fcd869d9b-bwx78" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-whisker--fcd869d9b--bwx78-eth0" Dec 16 12:46:31.275804 containerd[2141]: 2025-12-16 12:46:31.211 [INFO][4792] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842" HandleID="k8s-pod-network.dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842" Workload="ci--4515.1.0--a--a4975b77c5-k8s-whisker--fcd869d9b--bwx78-eth0" Dec 16 12:46:31.276078 containerd[2141]: 2025-12-16 12:46:31.211 [INFO][4792] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842" HandleID="k8s-pod-network.dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842" Workload="ci--4515.1.0--a--a4975b77c5-k8s-whisker--fcd869d9b--bwx78-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-a4975b77c5", "pod":"whisker-fcd869d9b-bwx78", "timestamp":"2025-12-16 12:46:31.211283835 +0000 UTC"}, Hostname:"ci-4515.1.0-a-a4975b77c5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:31.276078 containerd[2141]: 2025-12-16 12:46:31.211 [INFO][4792] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:31.276078 containerd[2141]: 2025-12-16 12:46:31.211 [INFO][4792] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:46:31.276078 containerd[2141]: 2025-12-16 12:46:31.211 [INFO][4792] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-a4975b77c5' Dec 16 12:46:31.276078 containerd[2141]: 2025-12-16 12:46:31.217 [INFO][4792] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:31.276078 containerd[2141]: 2025-12-16 12:46:31.221 [INFO][4792] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:31.276078 containerd[2141]: 2025-12-16 12:46:31.225 [INFO][4792] ipam/ipam.go 511: Trying affinity for 192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:31.276078 containerd[2141]: 2025-12-16 12:46:31.227 [INFO][4792] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:31.276078 containerd[2141]: 2025-12-16 12:46:31.229 [INFO][4792] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:31.276611 containerd[2141]: 2025-12-16 12:46:31.229 [INFO][4792] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:31.276611 containerd[2141]: 2025-12-16 12:46:31.230 [INFO][4792] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842 Dec 16 12:46:31.276611 containerd[2141]: 2025-12-16 12:46:31.236 [INFO][4792] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:31.276611 containerd[2141]: 2025-12-16 12:46:31.246 [INFO][4792] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.49.1/26] block=192.168.49.0/26 handle="k8s-pod-network.dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:31.276611 containerd[2141]: 2025-12-16 12:46:31.246 [INFO][4792] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.1/26] handle="k8s-pod-network.dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:31.276611 containerd[2141]: 2025-12-16 12:46:31.246 [INFO][4792] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:46:31.276611 containerd[2141]: 2025-12-16 12:46:31.247 [INFO][4792] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.49.1/26] IPv6=[] ContainerID="dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842" HandleID="k8s-pod-network.dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842" Workload="ci--4515.1.0--a--a4975b77c5-k8s-whisker--fcd869d9b--bwx78-eth0" Dec 16 12:46:31.276823 containerd[2141]: 2025-12-16 12:46:31.250 [INFO][4781] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842" Namespace="calico-system" Pod="whisker-fcd869d9b-bwx78" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-whisker--fcd869d9b--bwx78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--a4975b77c5-k8s-whisker--fcd869d9b--bwx78-eth0", GenerateName:"whisker-fcd869d9b-", Namespace:"calico-system", SelfLink:"", UID:"7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fcd869d9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-a4975b77c5", ContainerID:"", Pod:"whisker-fcd869d9b-bwx78", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.49.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicfcb039da07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:31.276823 containerd[2141]: 2025-12-16 12:46:31.250 [INFO][4781] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.1/32] ContainerID="dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842" Namespace="calico-system" Pod="whisker-fcd869d9b-bwx78" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-whisker--fcd869d9b--bwx78-eth0" Dec 16 12:46:31.276905 containerd[2141]: 2025-12-16 12:46:31.250 [INFO][4781] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicfcb039da07 ContainerID="dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842" Namespace="calico-system" Pod="whisker-fcd869d9b-bwx78" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-whisker--fcd869d9b--bwx78-eth0" Dec 16 12:46:31.276905 containerd[2141]: 2025-12-16 12:46:31.259 [INFO][4781] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842" Namespace="calico-system" Pod="whisker-fcd869d9b-bwx78" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-whisker--fcd869d9b--bwx78-eth0" Dec 16 12:46:31.276967 containerd[2141]: 2025-12-16 12:46:31.259 [INFO][4781] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842" Namespace="calico-system" 
Pod="whisker-fcd869d9b-bwx78" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-whisker--fcd869d9b--bwx78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--a4975b77c5-k8s-whisker--fcd869d9b--bwx78-eth0", GenerateName:"whisker-fcd869d9b-", Namespace:"calico-system", SelfLink:"", UID:"7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fcd869d9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-a4975b77c5", ContainerID:"dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842", Pod:"whisker-fcd869d9b-bwx78", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.49.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicfcb039da07", MAC:"0e:eb:20:ee:60:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:31.277012 containerd[2141]: 2025-12-16 12:46:31.273 [INFO][4781] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842" Namespace="calico-system" Pod="whisker-fcd869d9b-bwx78" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-whisker--fcd869d9b--bwx78-eth0" Dec 16 12:46:31.315660 containerd[2141]: time="2025-12-16T12:46:31.315607246Z" level=info msg="connecting to shim dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842" address="unix:///run/containerd/s/66f167abd3fc8221173df68cf4727fe7d44b8435d858bd5586c031f058f5c395" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:31.335291 systemd[1]: Started cri-containerd-dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842.scope - libcontainer container dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842. 
Dec 16 12:46:31.344000 audit: BPF prog-id=199 op=LOAD Dec 16 12:46:31.345000 audit: BPF prog-id=200 op=LOAD Dec 16 12:46:31.345000 audit[4827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4816 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.345000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463646136613433383565393566376439653338386161386262653238 Dec 16 12:46:31.345000 audit: BPF prog-id=200 op=UNLOAD Dec 16 12:46:31.345000 audit[4827]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4816 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.345000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463646136613433383565393566376439653338386161386262653238 Dec 16 12:46:31.345000 audit: BPF prog-id=201 op=LOAD Dec 16 12:46:31.345000 audit[4827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4816 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.345000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463646136613433383565393566376439653338386161386262653238 Dec 16 12:46:31.345000 audit: BPF prog-id=202 op=LOAD Dec 16 12:46:31.345000 audit[4827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4816 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.345000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463646136613433383565393566376439653338386161386262653238 Dec 16 12:46:31.345000 audit: BPF prog-id=202 op=UNLOAD Dec 16 12:46:31.345000 audit[4827]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4816 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.345000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463646136613433383565393566376439653338386161386262653238 Dec 16 12:46:31.345000 audit: BPF prog-id=201 op=UNLOAD Dec 16 12:46:31.345000 audit[4827]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4816 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.345000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463646136613433383565393566376439653338386161386262653238 Dec 16 12:46:31.346000 audit: BPF prog-id=203 op=LOAD Dec 16 12:46:31.346000 audit[4827]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4816 pid=4827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:31.346000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463646136613433383565393566376439653338386161386262653238 Dec 16 12:46:31.371656 containerd[2141]: time="2025-12-16T12:46:31.371618216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fcd869d9b-bwx78,Uid:7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21,Namespace:calico-system,Attempt:0,} returns sandbox id \"dcda6a4385e95f7d9e388aa8bbe28e2d1b67804eff06f61b3ad8569885198842\"" Dec 16 12:46:31.373295 containerd[2141]: time="2025-12-16T12:46:31.373265846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:46:31.566404 kubelet[3656]: I1216 12:46:31.566358 3656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3442cd81-f820-467d-9da4-12b0918f9098" path="/var/lib/kubelet/pods/3442cd81-f820-467d-9da4-12b0918f9098/volumes" Dec 16 12:46:31.657234 containerd[2141]: time="2025-12-16T12:46:31.657175618Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:31.660368 containerd[2141]: time="2025-12-16T12:46:31.660248032Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:46:31.660473 containerd[2141]: time="2025-12-16T12:46:31.660367019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:31.660568 kubelet[3656]: E1216 12:46:31.660520 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:46:31.660872 kubelet[3656]: E1216 12:46:31.660581 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:46:31.661649 kubelet[3656]: E1216 12:46:31.661607 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9a24d6b5f1b4428ea6052e9fe2a904c0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kvdr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-fcd869d9b-bwx78_calico-system(7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:31.663831 containerd[2141]: time="2025-12-16T12:46:31.663615630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:46:31.914836 containerd[2141]: time="2025-12-16T12:46:31.914681274Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:32.132602 containerd[2141]: time="2025-12-16T12:46:32.132524467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:32.132602 containerd[2141]: time="2025-12-16T12:46:32.132561108Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:46:32.133161 kubelet[3656]: E1216 12:46:32.133055 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:46:32.133231 kubelet[3656]: E1216 12:46:32.133165 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:46:32.134133 kubelet[3656]: E1216 12:46:32.133670 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvdr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-fcd869d9b-bwx78_calico-system(7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:32.135053 kubelet[3656]: E1216 12:46:32.134949 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fcd869d9b-bwx78" podUID="7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21" Dec 16 12:46:32.718219 systemd-networkd[1724]: calicfcb039da07: Gained IPv6LL Dec 16 12:46:32.721727 kubelet[3656]: E1216 12:46:32.721635 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fcd869d9b-bwx78" podUID="7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21" Dec 16 12:46:32.760000 audit[4968]: NETFILTER_CFG table=filter:122 family=2 entries=22 op=nft_register_rule pid=4968 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:32.760000 audit[4968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd38e96b0 a2=0 a3=1 items=0 ppid=3794 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:32.760000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:32.765000 audit[4968]: NETFILTER_CFG table=nat:123 family=2 entries=12 op=nft_register_rule pid=4968 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:32.765000 audit[4968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd38e96b0 a2=0 a3=1 items=0 ppid=3794 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:32.765000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:37.565004 containerd[2141]: time="2025-12-16T12:46:37.564951936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8447d995cc-x57ls,Uid:c9372ebb-481a-480c-8bf1-ba7918503e79,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:46:37.761165 systemd-networkd[1724]: cali0ace080bc07: Link UP Dec 16 12:46:37.761769 systemd-networkd[1724]: cali0ace080bc07: Gained carrier Dec 16 12:46:37.778440 containerd[2141]: 2025-12-16 12:46:37.691 [INFO][5075] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:46:37.778440 containerd[2141]: 2025-12-16 12:46:37.701 [INFO][5075] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--x57ls-eth0 calico-apiserver-8447d995cc- calico-apiserver c9372ebb-481a-480c-8bf1-ba7918503e79 796 0 2025-12-16 12:46:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8447d995cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-a-a4975b77c5 calico-apiserver-8447d995cc-x57ls eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0ace080bc07 [] [] }} ContainerID="1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432" Namespace="calico-apiserver" Pod="calico-apiserver-8447d995cc-x57ls" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--x57ls-" Dec 16 12:46:37.778440 containerd[2141]: 2025-12-16 12:46:37.701 [INFO][5075] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432" 
Namespace="calico-apiserver" Pod="calico-apiserver-8447d995cc-x57ls" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--x57ls-eth0" Dec 16 12:46:37.778440 containerd[2141]: 2025-12-16 12:46:37.721 [INFO][5087] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432" HandleID="k8s-pod-network.1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432" Workload="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--x57ls-eth0" Dec 16 12:46:37.778840 containerd[2141]: 2025-12-16 12:46:37.721 [INFO][5087] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432" HandleID="k8s-pod-network.1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432" Workload="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--x57ls-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b1d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-a-a4975b77c5", "pod":"calico-apiserver-8447d995cc-x57ls", "timestamp":"2025-12-16 12:46:37.721672967 +0000 UTC"}, Hostname:"ci-4515.1.0-a-a4975b77c5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:37.778840 containerd[2141]: 2025-12-16 12:46:37.721 [INFO][5087] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:37.778840 containerd[2141]: 2025-12-16 12:46:37.722 [INFO][5087] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:46:37.778840 containerd[2141]: 2025-12-16 12:46:37.722 [INFO][5087] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-a4975b77c5' Dec 16 12:46:37.778840 containerd[2141]: 2025-12-16 12:46:37.728 [INFO][5087] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:37.778840 containerd[2141]: 2025-12-16 12:46:37.733 [INFO][5087] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:37.778840 containerd[2141]: 2025-12-16 12:46:37.737 [INFO][5087] ipam/ipam.go 511: Trying affinity for 192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:37.778840 containerd[2141]: 2025-12-16 12:46:37.739 [INFO][5087] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:37.778840 containerd[2141]: 2025-12-16 12:46:37.740 [INFO][5087] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:37.779045 containerd[2141]: 2025-12-16 12:46:37.740 [INFO][5087] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:37.779045 containerd[2141]: 2025-12-16 12:46:37.742 [INFO][5087] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432 Dec 16 12:46:37.779045 containerd[2141]: 2025-12-16 12:46:37.747 [INFO][5087] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.49.0/26 
handle="k8s-pod-network.1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:37.779045 containerd[2141]: 2025-12-16 12:46:37.756 [INFO][5087] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.49.2/26] block=192.168.49.0/26 handle="k8s-pod-network.1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:37.779045 containerd[2141]: 2025-12-16 12:46:37.756 [INFO][5087] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.2/26] handle="k8s-pod-network.1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:37.779045 containerd[2141]: 2025-12-16 12:46:37.756 [INFO][5087] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:46:37.779045 containerd[2141]: 2025-12-16 12:46:37.756 [INFO][5087] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.49.2/26] IPv6=[] ContainerID="1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432" HandleID="k8s-pod-network.1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432" Workload="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--x57ls-eth0" Dec 16 12:46:37.779360 containerd[2141]: 2025-12-16 12:46:37.758 [INFO][5075] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432" Namespace="calico-apiserver" Pod="calico-apiserver-8447d995cc-x57ls" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--x57ls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--x57ls-eth0", GenerateName:"calico-apiserver-8447d995cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9372ebb-481a-480c-8bf1-ba7918503e79", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8447d995cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-a4975b77c5", ContainerID:"", Pod:"calico-apiserver-8447d995cc-x57ls", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ace080bc07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:37.779405 containerd[2141]: 2025-12-16 12:46:37.759 [INFO][5075] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.2/32] ContainerID="1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432" Namespace="calico-apiserver" Pod="calico-apiserver-8447d995cc-x57ls" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--x57ls-eth0" Dec 16 12:46:37.779405 containerd[2141]: 2025-12-16 
12:46:37.759 [INFO][5075] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ace080bc07 ContainerID="1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432" Namespace="calico-apiserver" Pod="calico-apiserver-8447d995cc-x57ls" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--x57ls-eth0" Dec 16 12:46:37.779405 containerd[2141]: 2025-12-16 12:46:37.761 [INFO][5075] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432" Namespace="calico-apiserver" Pod="calico-apiserver-8447d995cc-x57ls" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--x57ls-eth0" Dec 16 12:46:37.779578 containerd[2141]: 2025-12-16 12:46:37.762 [INFO][5075] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432" Namespace="calico-apiserver" Pod="calico-apiserver-8447d995cc-x57ls" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--x57ls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--x57ls-eth0", GenerateName:"calico-apiserver-8447d995cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9372ebb-481a-480c-8bf1-ba7918503e79", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8447d995cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-a4975b77c5", ContainerID:"1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432", Pod:"calico-apiserver-8447d995cc-x57ls", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ace080bc07", MAC:"e6:1d:9d:49:c5:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:37.779661 containerd[2141]: 2025-12-16 12:46:37.775 [INFO][5075] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432" Namespace="calico-apiserver" Pod="calico-apiserver-8447d995cc-x57ls" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--x57ls-eth0" Dec 16 12:46:38.236599 containerd[2141]: time="2025-12-16T12:46:38.236514605Z" level=info msg="connecting to shim 1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432" address="unix:///run/containerd/s/d8c24de0c8ca77589820b15c4442bf755450646efc8ee9a23dcee7f2bb74b2c3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:38.258298 systemd[1]: Started 
cri-containerd-1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432.scope - libcontainer container 1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432. Dec 16 12:46:38.267000 audit: BPF prog-id=204 op=LOAD Dec 16 12:46:38.271669 kernel: kauditd_printk_skb: 33 callbacks suppressed Dec 16 12:46:38.271770 kernel: audit: type=1334 audit(1765889198.267:615): prog-id=204 op=LOAD Dec 16 12:46:38.282833 kernel: audit: type=1334 audit(1765889198.270:616): prog-id=205 op=LOAD Dec 16 12:46:38.270000 audit: BPF prog-id=205 op=LOAD Dec 16 12:46:38.270000 audit[5122]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5111 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:38.305519 kernel: audit: type=1300 audit(1765889198.270:616): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5111 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:38.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164323134393839636464343161303362326432633265613539303335 Dec 16 12:46:38.327046 kernel: audit: type=1327 audit(1765889198.270:616): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164323134393839636464343161303362326432633265613539303335 Dec 16 12:46:38.327168 kernel: audit: type=1334 audit(1765889198.275:617): prog-id=205 op=UNLOAD Dec 16 12:46:38.275000 audit: BPF prog-id=205 op=UNLOAD Dec 16 12:46:38.275000 audit[5122]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5111 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:38.350728 kernel: audit: type=1300 audit(1765889198.275:617): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5111 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:38.275000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164323134393839636464343161303362326432633265613539303335 Dec 16 12:46:38.373185 kernel: audit: type=1327 audit(1765889198.275:617): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164323134393839636464343161303362326432633265613539303335 Dec 16 12:46:38.276000 audit: BPF prog-id=206 op=LOAD Dec 16 12:46:38.384703 kernel: audit: type=1334 audit(1765889198.276:618): prog-id=206 op=LOAD Dec 16 12:46:38.276000 audit[5122]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5111 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:38.402848 kernel: audit: type=1300 audit(1765889198.276:618): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5111 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:38.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164323134393839636464343161303362326432633265613539303335 Dec 16 12:46:38.423318 kernel: audit: type=1327 audit(1765889198.276:618): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164323134393839636464343161303362326432633265613539303335 Dec 16 12:46:38.276000 audit: BPF prog-id=207 op=LOAD Dec 16 12:46:38.276000 audit[5122]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=5111 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:38.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164323134393839636464343161303362326432633265613539303335 Dec 16 12:46:38.276000 audit: BPF prog-id=207 op=UNLOAD Dec 16 12:46:38.276000 audit[5122]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5111 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:38.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164323134393839636464343161303362326432633265613539303335 Dec 16 12:46:38.276000 audit: BPF prog-id=206 op=UNLOAD Dec 16 12:46:38.276000 audit[5122]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5111 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:38.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164323134393839636464343161303362326432633265613539303335 Dec 16 12:46:38.276000 audit: BPF prog-id=208 op=LOAD Dec 16 12:46:38.276000 audit[5122]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=5111 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:38.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164323134393839636464343161303362326432633265613539303335 Dec 16 12:46:38.431691 containerd[2141]: time="2025-12-16T12:46:38.431621166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8447d995cc-x57ls,Uid:c9372ebb-481a-480c-8bf1-ba7918503e79,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1d214989cdd41a03b2d2c2ea590356e6bece515673bd09f18c8102bfce81e432\"" Dec 16 12:46:38.434123 containerd[2141]: time="2025-12-16T12:46:38.433537593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:46:38.565109 containerd[2141]: time="2025-12-16T12:46:38.564891569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f76h8,Uid:7992b8dd-c425-405a-a189-bc8e22badaee,Namespace:kube-system,Attempt:0,}" Dec 16 12:46:38.565473 containerd[2141]: time="2025-12-16T12:46:38.565000492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q9ldj,Uid:c8d2f0f9-d4bf-424e-80b4-888570287c6a,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:38.713272 systemd-networkd[1724]: caliea420887185: Link UP Dec 16 12:46:38.713800 systemd-networkd[1724]: caliea420887185: Gained carrier Dec 16 12:46:38.720766 containerd[2141]: time="2025-12-16T12:46:38.720724492Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:38.738428 containerd[2141]: 2025-12-16 12:46:38.642 [INFO][5167] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:46:38.738428 containerd[2141]: 2025-12-16 12:46:38.651 [INFO][5167] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--f76h8-eth0 coredns-668d6bf9bc- kube-system 7992b8dd-c425-405a-a189-bc8e22badaee 797 0 2025-12-16 12:45:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-a-a4975b77c5 coredns-668d6bf9bc-f76h8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliea420887185 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09" Namespace="kube-system" Pod="coredns-668d6bf9bc-f76h8" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--f76h8-" Dec 16 12:46:38.738428 containerd[2141]: 2025-12-16 12:46:38.651 [INFO][5167] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09" Namespace="kube-system" Pod="coredns-668d6bf9bc-f76h8" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--f76h8-eth0" Dec 16 12:46:38.738428 containerd[2141]: 2025-12-16 12:46:38.670 [INFO][5180] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09" HandleID="k8s-pod-network.fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09" Workload="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--f76h8-eth0" Dec 16 12:46:38.738838 
containerd[2141]: 2025-12-16 12:46:38.670 [INFO][5180] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09" HandleID="k8s-pod-network.fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09" Workload="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--f76h8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b0a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-a-a4975b77c5", "pod":"coredns-668d6bf9bc-f76h8", "timestamp":"2025-12-16 12:46:38.670617531 +0000 UTC"}, Hostname:"ci-4515.1.0-a-a4975b77c5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:38.738838 containerd[2141]: 2025-12-16 12:46:38.670 [INFO][5180] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:38.738838 containerd[2141]: 2025-12-16 12:46:38.670 [INFO][5180] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:46:38.738838 containerd[2141]: 2025-12-16 12:46:38.670 [INFO][5180] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-a4975b77c5' Dec 16 12:46:38.738838 containerd[2141]: 2025-12-16 12:46:38.679 [INFO][5180] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:38.738838 containerd[2141]: 2025-12-16 12:46:38.683 [INFO][5180] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:38.738838 containerd[2141]: 2025-12-16 12:46:38.687 [INFO][5180] ipam/ipam.go 511: Trying affinity for 192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:38.738838 containerd[2141]: 2025-12-16 12:46:38.689 [INFO][5180] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:38.738838 containerd[2141]: 2025-12-16 12:46:38.691 [INFO][5180] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:38.739674 containerd[2141]: 2025-12-16 12:46:38.691 [INFO][5180] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:38.739674 containerd[2141]: 2025-12-16 12:46:38.692 [INFO][5180] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09 Dec 16 12:46:38.739674 containerd[2141]: 2025-12-16 12:46:38.697 [INFO][5180] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:38.739674 containerd[2141]: 2025-12-16 12:46:38.707 [INFO][5180] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.49.3/26] block=192.168.49.0/26 handle="k8s-pod-network.fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:38.739674 containerd[2141]: 2025-12-16 12:46:38.707 [INFO][5180] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.3/26] handle="k8s-pod-network.fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09" host="ci-4515.1.0-a-a4975b77c5" 
Dec 16 12:46:38.739674 containerd[2141]: 2025-12-16 12:46:38.707 [INFO][5180] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:46:38.739674 containerd[2141]: 2025-12-16 12:46:38.707 [INFO][5180] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.49.3/26] IPv6=[] ContainerID="fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09" HandleID="k8s-pod-network.fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09" Workload="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--f76h8-eth0" Dec 16 12:46:38.739787 containerd[2141]: 2025-12-16 12:46:38.709 [INFO][5167] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09" Namespace="kube-system" Pod="coredns-668d6bf9bc-f76h8" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--f76h8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--f76h8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7992b8dd-c425-405a-a189-bc8e22badaee", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-a4975b77c5", ContainerID:"", Pod:"coredns-668d6bf9bc-f76h8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliea420887185", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:38.739787 containerd[2141]: 2025-12-16 12:46:38.709 [INFO][5167] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.3/32] ContainerID="fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09" Namespace="kube-system" Pod="coredns-668d6bf9bc-f76h8" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--f76h8-eth0" Dec 16 12:46:38.739787 containerd[2141]: 2025-12-16 12:46:38.709 [INFO][5167] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea420887185 ContainerID="fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09" Namespace="kube-system" Pod="coredns-668d6bf9bc-f76h8" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--f76h8-eth0" Dec 16 12:46:38.739787 containerd[2141]: 2025-12-16 12:46:38.714 [INFO][5167] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09" Namespace="kube-system" Pod="coredns-668d6bf9bc-f76h8" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--f76h8-eth0" Dec 16 12:46:38.739787 containerd[2141]: 2025-12-16 12:46:38.714 [INFO][5167] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09" Namespace="kube-system" Pod="coredns-668d6bf9bc-f76h8" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--f76h8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--f76h8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7992b8dd-c425-405a-a189-bc8e22badaee", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-a4975b77c5", ContainerID:"fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09", Pod:"coredns-668d6bf9bc-f76h8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliea420887185", MAC:"0e:01:39:ea:4e:b2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:38.739787 containerd[2141]: 2025-12-16 12:46:38.735 [INFO][5167] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09" Namespace="kube-system" Pod="coredns-668d6bf9bc-f76h8" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--f76h8-eth0" Dec 16 12:46:38.779149 containerd[2141]: time="2025-12-16T12:46:38.778652468Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:46:38.779403 containerd[2141]: time="2025-12-16T12:46:38.779364762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:38.779773 kubelet[3656]: E1216 12:46:38.779730 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:38.780214 kubelet[3656]: E1216 12:46:38.779783 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:38.781665 kubelet[3656]: E1216 12:46:38.781570 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zg9nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8447d995cc-x57ls_calico-apiserver(c9372ebb-481a-480c-8bf1-ba7918503e79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:38.782797 kubelet[3656]: E1216 12:46:38.782735 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8447d995cc-x57ls" podUID="c9372ebb-481a-480c-8bf1-ba7918503e79" Dec 16 12:46:38.825635 
systemd-networkd[1724]: cali4ca742eeae5: Link UP Dec 16 12:46:38.826201 systemd-networkd[1724]: cali4ca742eeae5: Gained carrier Dec 16 12:46:38.846619 containerd[2141]: 2025-12-16 12:46:38.753 [INFO][5188] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:46:38.846619 containerd[2141]: 2025-12-16 12:46:38.762 [INFO][5188] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--a4975b77c5-k8s-csi--node--driver--q9ldj-eth0 csi-node-driver- calico-system c8d2f0f9-d4bf-424e-80b4-888570287c6a 689 0 2025-12-16 12:46:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515.1.0-a-a4975b77c5 csi-node-driver-q9ldj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4ca742eeae5 [] [] }} ContainerID="e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03" Namespace="calico-system" Pod="csi-node-driver-q9ldj" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-csi--node--driver--q9ldj-" Dec 16 12:46:38.846619 containerd[2141]: 2025-12-16 12:46:38.762 [INFO][5188] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03" Namespace="calico-system" Pod="csi-node-driver-q9ldj" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-csi--node--driver--q9ldj-eth0" Dec 16 12:46:38.846619 containerd[2141]: 2025-12-16 12:46:38.783 [INFO][5206] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03" HandleID="k8s-pod-network.e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03" Workload="ci--4515.1.0--a--a4975b77c5-k8s-csi--node--driver--q9ldj-eth0" Dec 16 12:46:38.846619 containerd[2141]: 2025-12-16 12:46:38.784 [INFO][5206] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03" HandleID="k8s-pod-network.e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03" Workload="ci--4515.1.0--a--a4975b77c5-k8s-csi--node--driver--q9ldj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b060), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-a4975b77c5", "pod":"csi-node-driver-q9ldj", "timestamp":"2025-12-16 12:46:38.783959118 +0000 UTC"}, Hostname:"ci-4515.1.0-a-a4975b77c5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:38.846619 containerd[2141]: 2025-12-16 12:46:38.784 [INFO][5206] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:38.846619 containerd[2141]: 2025-12-16 12:46:38.785 [INFO][5206] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:46:38.846619 containerd[2141]: 2025-12-16 12:46:38.785 [INFO][5206] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-a4975b77c5' Dec 16 12:46:38.846619 containerd[2141]: 2025-12-16 12:46:38.793 [INFO][5206] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:38.846619 containerd[2141]: 2025-12-16 12:46:38.797 [INFO][5206] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:38.846619 containerd[2141]: 2025-12-16 12:46:38.802 [INFO][5206] ipam/ipam.go 511: Trying affinity for 192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:38.846619 containerd[2141]: 2025-12-16 12:46:38.803 [INFO][5206] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:38.846619 containerd[2141]: 2025-12-16 12:46:38.805 [INFO][5206] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:38.846619 containerd[2141]: 2025-12-16 12:46:38.805 [INFO][5206] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:38.846619 containerd[2141]: 2025-12-16 12:46:38.807 [INFO][5206] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03 Dec 16 12:46:38.846619 containerd[2141]: 2025-12-16 12:46:38.811 [INFO][5206] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:38.846619 containerd[2141]: 2025-12-16 12:46:38.820 [INFO][5206] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.49.4/26] block=192.168.49.0/26 handle="k8s-pod-network.e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:38.846619 containerd[2141]: 2025-12-16 12:46:38.820 [INFO][5206] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.4/26] handle="k8s-pod-network.e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:38.846619 containerd[2141]: 2025-12-16 12:46:38.820 [INFO][5206] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:46:38.846619 containerd[2141]: 2025-12-16 12:46:38.820 [INFO][5206] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.49.4/26] IPv6=[] ContainerID="e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03" HandleID="k8s-pod-network.e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03" Workload="ci--4515.1.0--a--a4975b77c5-k8s-csi--node--driver--q9ldj-eth0" Dec 16 12:46:38.847434 containerd[2141]: 2025-12-16 12:46:38.821 [INFO][5188] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03" Namespace="calico-system" Pod="csi-node-driver-q9ldj" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-csi--node--driver--q9ldj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--a4975b77c5-k8s-csi--node--driver--q9ldj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c8d2f0f9-d4bf-424e-80b4-888570287c6a", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-a4975b77c5", ContainerID:"", Pod:"csi-node-driver-q9ldj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.49.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4ca742eeae5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:38.847434 containerd[2141]: 2025-12-16 12:46:38.821 [INFO][5188] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.4/32] ContainerID="e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03" Namespace="calico-system" Pod="csi-node-driver-q9ldj" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-csi--node--driver--q9ldj-eth0" Dec 16 12:46:38.847434 containerd[2141]: 2025-12-16 12:46:38.821 [INFO][5188] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ca742eeae5 ContainerID="e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03" Namespace="calico-system" Pod="csi-node-driver-q9ldj" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-csi--node--driver--q9ldj-eth0" Dec 16 12:46:38.847434 containerd[2141]: 2025-12-16 12:46:38.827 [INFO][5188] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03" Namespace="calico-system" Pod="csi-node-driver-q9ldj" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-csi--node--driver--q9ldj-eth0" Dec 16 12:46:38.847434 containerd[2141]: 2025-12-16 12:46:38.829 [INFO][5188] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03" Namespace="calico-system" Pod="csi-node-driver-q9ldj" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-csi--node--driver--q9ldj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--a4975b77c5-k8s-csi--node--driver--q9ldj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c8d2f0f9-d4bf-424e-80b4-888570287c6a", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-a4975b77c5", ContainerID:"e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03", Pod:"csi-node-driver-q9ldj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.49.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4ca742eeae5", MAC:"ca:d7:b5:b8:a4:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:38.847434 containerd[2141]: 2025-12-16 12:46:38.843 [INFO][5188] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03" Namespace="calico-system" Pod="csi-node-driver-q9ldj" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-csi--node--driver--q9ldj-eth0" Dec 16 12:46:38.862281 systemd-networkd[1724]: cali0ace080bc07: Gained IPv6LL Dec 16 12:46:39.531542 containerd[2141]: time="2025-12-16T12:46:39.531441631Z" level=info msg="connecting to shim fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09" address="unix:///run/containerd/s/9a7eee131dc9089ab033e2243f81ba0ae7810498fa7f03094a6f55bf49292642" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:39.554478 systemd[1]: Started cri-containerd-fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09.scope - libcontainer container fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09. 
Dec 16 12:46:39.564000 audit: BPF prog-id=209 op=LOAD Dec 16 12:46:39.565000 audit: BPF prog-id=210 op=LOAD Dec 16 12:46:39.565000 audit[5258]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5246 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:39.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665653563646230393962653863383363393037613631383136363339 Dec 16 12:46:39.565000 audit: BPF prog-id=210 op=UNLOAD Dec 16 12:46:39.565000 audit[5258]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5246 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:39.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665653563646230393962653863383363393037613631383136363339 Dec 16 12:46:39.565000 audit: BPF prog-id=211 op=LOAD Dec 16 12:46:39.565000 audit[5258]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5246 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:39.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665653563646230393962653863383363393037613631383136363339 Dec 16 12:46:39.565000 audit: BPF prog-id=212 op=LOAD Dec 16 12:46:39.565000 audit[5258]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=5246 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:39.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665653563646230393962653863383363393037613631383136363339 Dec 16 12:46:39.565000 audit: BPF prog-id=212 op=UNLOAD Dec 16 12:46:39.565000 audit[5258]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5246 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:39.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665653563646230393962653863383363393037613631383136363339 Dec 16 12:46:39.565000 audit: BPF prog-id=211 op=UNLOAD Dec 16 12:46:39.565000 audit[5258]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5246 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:39.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665653563646230393962653863383363393037613631383136363339 Dec 16 12:46:39.565000 audit: BPF prog-id=213 op=LOAD Dec 16 12:46:39.565000 audit[5258]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=5246 pid=5258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:39.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665653563646230393962653863383363393037613631383136363339 Dec 16 12:46:39.571364 containerd[2141]: time="2025-12-16T12:46:39.571021063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c8c58856c-hs58v,Uid:1c9e39e8-3a67-4975-af12-07644724165b,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:39.572283 containerd[2141]: time="2025-12-16T12:46:39.572155666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v9rsw,Uid:cf72c120-6b19-4407-a659-b4a889422882,Namespace:kube-system,Attempt:0,}" Dec 16 12:46:39.572495 containerd[2141]: time="2025-12-16T12:46:39.572369648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8447d995cc-vpb8q,Uid:f4be582d-98bf-4dca-8981-8263274550a3,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:46:39.732730 containerd[2141]: time="2025-12-16T12:46:39.732674340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f76h8,Uid:7992b8dd-c425-405a-a189-bc8e22badaee,Namespace:kube-system,Attempt:0,} returns sandbox id \"fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09\"" Dec 16 12:46:39.737817 containerd[2141]: time="2025-12-16T12:46:39.737280808Z" level=info msg="CreateContainer within sandbox \"fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:46:39.745624 kubelet[3656]: E1216 12:46:39.745582 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8447d995cc-x57ls" podUID="c9372ebb-481a-480c-8bf1-ba7918503e79" Dec 16 12:46:39.779000 audit[5285]: NETFILTER_CFG table=filter:124 family=2 entries=22 op=nft_register_rule pid=5285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:39.779000 audit[5285]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff552da20 a2=0 a3=1 items=0 ppid=3794 pid=5285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:39.779000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:39.782000 audit[5285]: NETFILTER_CFG table=nat:125 family=2 entries=12 op=nft_register_rule pid=5285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:39.782000 audit[5285]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff552da20 a2=0 a3=1 items=0 ppid=3794 pid=5285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:39.782000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:39.787315 containerd[2141]: time="2025-12-16T12:46:39.787248949Z" level=info msg="connecting to shim e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03" address="unix:///run/containerd/s/615982825acefea27a03ef4caa70333355cb55847ec18b1090ee15a6d03e605e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:39.817310 systemd[1]: Started cri-containerd-e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03.scope - libcontainer container e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03. Dec 16 12:46:39.838000 audit: BPF prog-id=214 op=LOAD Dec 16 12:46:39.839000 audit: BPF prog-id=215 op=LOAD Dec 16 12:46:39.839000 audit[5305]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=5294 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:39.839000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538316237666238316366643465666566633563363063653137323834 Dec 16 12:46:39.839000 audit: BPF prog-id=215 op=UNLOAD Dec 16 12:46:39.839000 audit[5305]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5294 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:39.839000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538316237666238316366643465666566633563363063653137323834 Dec 16 12:46:39.840000 audit: BPF prog-id=216 op=LOAD Dec 16 12:46:39.840000 audit[5305]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=5294 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:39.840000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538316237666238316366643465666566633563363063653137323834 Dec 16 12:46:39.840000 audit: BPF prog-id=217 op=LOAD Dec 16 12:46:39.840000 audit[5305]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=5294 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:39.840000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538316237666238316366643465666566633563363063653137323834 Dec 16 12:46:39.840000 audit: BPF prog-id=217 op=UNLOAD Dec 16 12:46:39.840000 audit[5305]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5294 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:39.840000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538316237666238316366643465666566633563363063653137323834 Dec 16 12:46:39.840000 audit: BPF prog-id=216 op=UNLOAD Dec 16 12:46:39.840000 audit[5305]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5294 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:39.840000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538316237666238316366643465666566633563363063653137323834 Dec 16 12:46:39.840000 audit: BPF prog-id=218 op=LOAD Dec 16 12:46:39.840000 audit[5305]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=5294 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:39.840000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538316237666238316366643465666566633563363063653137323834 Dec 16 12:46:39.950225 systemd-networkd[1724]: cali4ca742eeae5: Gained IPv6LL Dec 16 12:46:40.123705 containerd[2141]: time="2025-12-16T12:46:40.123347597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q9ldj,Uid:c8d2f0f9-d4bf-424e-80b4-888570287c6a,Namespace:calico-system,Attempt:0,} returns sandbox id \"e81b7fb81cfd4efefc5c60ce1728440b2020d5e45e82e4fc848dbfdb627eff03\"" Dec 16 12:46:40.128992 containerd[2141]: time="2025-12-16T12:46:40.128301060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 
12:46:40.248968 systemd-networkd[1724]: cali8a9342abde4: Link UP Dec 16 12:46:40.249478 systemd-networkd[1724]: cali8a9342abde4: Gained carrier Dec 16 12:46:40.267844 containerd[2141]: 2025-12-16 12:46:40.088 [INFO][5334] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:46:40.267844 containerd[2141]: 2025-12-16 12:46:40.099 [INFO][5334] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--a4975b77c5-k8s-calico--kube--controllers--6c8c58856c--hs58v-eth0 calico-kube-controllers-6c8c58856c- calico-system 1c9e39e8-3a67-4975-af12-07644724165b 792 0 2025-12-16 12:46:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c8c58856c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515.1.0-a-a4975b77c5 calico-kube-controllers-6c8c58856c-hs58v eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8a9342abde4 [] [] }} ContainerID="ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f" Namespace="calico-system" Pod="calico-kube-controllers-6c8c58856c-hs58v" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--kube--controllers--6c8c58856c--hs58v-" Dec 16 12:46:40.267844 containerd[2141]: 2025-12-16 12:46:40.099 [INFO][5334] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f" Namespace="calico-system" Pod="calico-kube-controllers-6c8c58856c-hs58v" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--kube--controllers--6c8c58856c--hs58v-eth0" Dec 16 12:46:40.267844 containerd[2141]: 2025-12-16 12:46:40.121 [INFO][5347] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f" HandleID="k8s-pod-network.ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f" Workload="ci--4515.1.0--a--a4975b77c5-k8s-calico--kube--controllers--6c8c58856c--hs58v-eth0" Dec 16 12:46:40.267844 containerd[2141]: 2025-12-16 12:46:40.121 [INFO][5347] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f" HandleID="k8s-pod-network.ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f" Workload="ci--4515.1.0--a--a4975b77c5-k8s-calico--kube--controllers--6c8c58856c--hs58v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-a4975b77c5", "pod":"calico-kube-controllers-6c8c58856c-hs58v", "timestamp":"2025-12-16 12:46:40.121736236 +0000 UTC"}, Hostname:"ci-4515.1.0-a-a4975b77c5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:40.267844 containerd[2141]: 2025-12-16 12:46:40.122 [INFO][5347] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:40.267844 containerd[2141]: 2025-12-16 12:46:40.122 [INFO][5347] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:46:40.267844 containerd[2141]: 2025-12-16 12:46:40.122 [INFO][5347] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-a4975b77c5' Dec 16 12:46:40.267844 containerd[2141]: 2025-12-16 12:46:40.132 [INFO][5347] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.267844 containerd[2141]: 2025-12-16 12:46:40.207 [INFO][5347] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.267844 containerd[2141]: 2025-12-16 12:46:40.216 [INFO][5347] ipam/ipam.go 511: Trying affinity for 192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.267844 containerd[2141]: 2025-12-16 12:46:40.219 [INFO][5347] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.267844 containerd[2141]: 2025-12-16 12:46:40.223 [INFO][5347] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.267844 containerd[2141]: 2025-12-16 12:46:40.223 [INFO][5347] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.267844 containerd[2141]: 2025-12-16 12:46:40.225 [INFO][5347] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f Dec 16 12:46:40.267844 containerd[2141]: 2025-12-16 12:46:40.236 [INFO][5347] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.267844 containerd[2141]: 2025-12-16 12:46:40.243 [INFO][5347] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.49.5/26] block=192.168.49.0/26 handle="k8s-pod-network.ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.267844 containerd[2141]: 2025-12-16 12:46:40.243 [INFO][5347] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.5/26] handle="k8s-pod-network.ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.267844 containerd[2141]: 2025-12-16 12:46:40.243 [INFO][5347] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:46:40.267844 containerd[2141]: 2025-12-16 12:46:40.243 [INFO][5347] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.49.5/26] IPv6=[] ContainerID="ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f" HandleID="k8s-pod-network.ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f" Workload="ci--4515.1.0--a--a4975b77c5-k8s-calico--kube--controllers--6c8c58856c--hs58v-eth0" Dec 16 12:46:40.268513 containerd[2141]: 2025-12-16 12:46:40.246 [INFO][5334] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f" Namespace="calico-system" Pod="calico-kube-controllers-6c8c58856c-hs58v" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--kube--controllers--6c8c58856c--hs58v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--a4975b77c5-k8s-calico--kube--controllers--6c8c58856c--hs58v-eth0", GenerateName:"calico-kube-controllers-6c8c58856c-", Namespace:"calico-system", SelfLink:"", UID:"1c9e39e8-3a67-4975-af12-07644724165b", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c8c58856c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-a4975b77c5", ContainerID:"", Pod:"calico-kube-controllers-6c8c58856c-hs58v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.49.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a9342abde4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:40.268513 containerd[2141]: 2025-12-16 12:46:40.246 [INFO][5334] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.5/32] ContainerID="ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f" Namespace="calico-system" Pod="calico-kube-controllers-6c8c58856c-hs58v" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--kube--controllers--6c8c58856c--hs58v-eth0" Dec 16 12:46:40.268513 containerd[2141]: 2025-12-16 12:46:40.246 [INFO][5334] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a9342abde4 ContainerID="ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f" Namespace="calico-system" Pod="calico-kube-controllers-6c8c58856c-hs58v" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--kube--controllers--6c8c58856c--hs58v-eth0" Dec 16 12:46:40.268513 containerd[2141]: 2025-12-16 12:46:40.249 [INFO][5334] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f" Namespace="calico-system" Pod="calico-kube-controllers-6c8c58856c-hs58v" 
WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--kube--controllers--6c8c58856c--hs58v-eth0" Dec 16 12:46:40.268513 containerd[2141]: 2025-12-16 12:46:40.249 [INFO][5334] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f" Namespace="calico-system" Pod="calico-kube-controllers-6c8c58856c-hs58v" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--kube--controllers--6c8c58856c--hs58v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--a4975b77c5-k8s-calico--kube--controllers--6c8c58856c--hs58v-eth0", GenerateName:"calico-kube-controllers-6c8c58856c-", Namespace:"calico-system", SelfLink:"", UID:"1c9e39e8-3a67-4975-af12-07644724165b", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c8c58856c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-a4975b77c5", ContainerID:"ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f", Pod:"calico-kube-controllers-6c8c58856c-hs58v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.49.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a9342abde4", MAC:"ea:60:f0:fd:76:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:40.268513 containerd[2141]: 2025-12-16 12:46:40.265 [INFO][5334] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f" Namespace="calico-system" Pod="calico-kube-controllers-6c8c58856c-hs58v" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--kube--controllers--6c8c58856c--hs58v-eth0" Dec 16 12:46:40.388528 systemd-networkd[1724]: cali5fc2ea76648: Link UP Dec 16 12:46:40.389005 systemd-networkd[1724]: cali5fc2ea76648: Gained carrier Dec 16 12:46:40.408049 containerd[2141]: 2025-12-16 12:46:40.146 [INFO][5353] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:46:40.408049 containerd[2141]: 2025-12-16 12:46:40.157 [INFO][5353] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--v9rsw-eth0 coredns-668d6bf9bc- kube-system cf72c120-6b19-4407-a659-b4a889422882 798 0 2025-12-16 12:45:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-a-a4975b77c5 coredns-668d6bf9bc-v9rsw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5fc2ea76648 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 
9153 0 }] [] }} ContainerID="761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620" Namespace="kube-system" Pod="coredns-668d6bf9bc-v9rsw" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--v9rsw-" Dec 16 12:46:40.408049 containerd[2141]: 2025-12-16 12:46:40.157 [INFO][5353] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620" Namespace="kube-system" Pod="coredns-668d6bf9bc-v9rsw" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--v9rsw-eth0" Dec 16 12:46:40.408049 containerd[2141]: 2025-12-16 12:46:40.180 [INFO][5367] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620" HandleID="k8s-pod-network.761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620" Workload="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--v9rsw-eth0" Dec 16 12:46:40.408049 containerd[2141]: 2025-12-16 12:46:40.180 [INFO][5367] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620" HandleID="k8s-pod-network.761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620" Workload="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--v9rsw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b16b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-a-a4975b77c5", "pod":"coredns-668d6bf9bc-v9rsw", "timestamp":"2025-12-16 12:46:40.180587912 +0000 UTC"}, Hostname:"ci-4515.1.0-a-a4975b77c5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:40.408049 containerd[2141]: 2025-12-16 12:46:40.180 [INFO][5367] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:40.408049 containerd[2141]: 2025-12-16 12:46:40.243 [INFO][5367] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:46:40.408049 containerd[2141]: 2025-12-16 12:46:40.243 [INFO][5367] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-a4975b77c5' Dec 16 12:46:40.408049 containerd[2141]: 2025-12-16 12:46:40.255 [INFO][5367] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.408049 containerd[2141]: 2025-12-16 12:46:40.307 [INFO][5367] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.408049 containerd[2141]: 2025-12-16 12:46:40.314 [INFO][5367] ipam/ipam.go 511: Trying affinity for 192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.408049 containerd[2141]: 2025-12-16 12:46:40.315 [INFO][5367] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.408049 containerd[2141]: 2025-12-16 12:46:40.317 [INFO][5367] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.408049 containerd[2141]: 2025-12-16 12:46:40.317 [INFO][5367] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.408049 containerd[2141]: 2025-12-16 12:46:40.319 [INFO][5367] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620 Dec 16 12:46:40.408049 containerd[2141]: 2025-12-16 12:46:40.326 [INFO][5367] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.408049 containerd[2141]: 2025-12-16 12:46:40.381 [INFO][5367] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.49.6/26] block=192.168.49.0/26 handle="k8s-pod-network.761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.408049 containerd[2141]: 2025-12-16 12:46:40.381 [INFO][5367] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.6/26] handle="k8s-pod-network.761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.408049 containerd[2141]: 2025-12-16 12:46:40.381 [INFO][5367] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
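[Editor's note, a minimal illustrative sketch, not part of the original log.] The IPAM trace above claims 192.168.49.6 for coredns-668d6bf9bc-v9rsw out of the block 192.168.49.0/26 that is affine to node ci-4515.1.0-a-a4975b77c5, and the endpoint later stores it as a /32. Using only the Go standard library (net/netip), the containment and block size can be checked like this; the values are taken straight from the log lines above:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Affine block reported by ipam/ipam.go in the trace above.
	block := netip.MustParsePrefix("192.168.49.0/26")
	// Address claimed for the coredns pod; the WorkloadEndpoint records it as 192.168.49.6/32.
	pod := netip.MustParseAddr("192.168.49.6")

	fmt.Println(block.Contains(pod))      // true: the claim stays inside the node's affine block
	fmt.Println(1 << (32 - block.Bits())) // 64: number of addresses a /26 block can hold
}
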
Dec 16 12:46:40.408049 containerd[2141]: 2025-12-16 12:46:40.381 [INFO][5367] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.49.6/26] IPv6=[] ContainerID="761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620" HandleID="k8s-pod-network.761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620" Workload="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--v9rsw-eth0" Dec 16 12:46:40.409018 containerd[2141]: 2025-12-16 12:46:40.384 [INFO][5353] cni-plugin/k8s.go 418: Populated endpoint ContainerID="761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620" Namespace="kube-system" Pod="coredns-668d6bf9bc-v9rsw" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--v9rsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--v9rsw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cf72c120-6b19-4407-a659-b4a889422882", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-a4975b77c5", ContainerID:"", Pod:"coredns-668d6bf9bc-v9rsw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5fc2ea76648", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:40.409018 containerd[2141]: 2025-12-16 12:46:40.384 [INFO][5353] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.6/32] ContainerID="761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620" Namespace="kube-system" Pod="coredns-668d6bf9bc-v9rsw" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--v9rsw-eth0" Dec 16 12:46:40.409018 containerd[2141]: 2025-12-16 12:46:40.384 [INFO][5353] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5fc2ea76648 ContainerID="761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620" Namespace="kube-system" Pod="coredns-668d6bf9bc-v9rsw" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--v9rsw-eth0" Dec 16 12:46:40.409018 containerd[2141]: 2025-12-16 12:46:40.389 [INFO][5353] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-v9rsw" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--v9rsw-eth0" Dec 16 12:46:40.409018 containerd[2141]: 2025-12-16 12:46:40.390 [INFO][5353] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620" Namespace="kube-system" Pod="coredns-668d6bf9bc-v9rsw" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--v9rsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--v9rsw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cf72c120-6b19-4407-a659-b4a889422882", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 45, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-a4975b77c5", ContainerID:"761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620", Pod:"coredns-668d6bf9bc-v9rsw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5fc2ea76648", MAC:"d6:97:2d:c6:f3:b2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:40.409018 containerd[2141]: 2025-12-16 12:46:40.404 [INFO][5353] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620" Namespace="kube-system" Pod="coredns-668d6bf9bc-v9rsw" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-coredns--668d6bf9bc--v9rsw-eth0" Dec 16 12:46:40.444662 systemd-networkd[1724]: cali7d770f48f2f: Link UP Dec 16 12:46:40.445271 systemd-networkd[1724]: cali7d770f48f2f: Gained carrier Dec 16 12:46:40.461850 containerd[2141]: 2025-12-16 12:46:40.188 [INFO][5371] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:46:40.461850 containerd[2141]: 2025-12-16 12:46:40.197 [INFO][5371] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--vpb8q-eth0 calico-apiserver-8447d995cc- calico-apiserver f4be582d-98bf-4dca-8981-8263274550a3 800 0 2025-12-16 12:46:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8447d995cc projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-a-a4975b77c5 calico-apiserver-8447d995cc-vpb8q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7d770f48f2f [] [] }} ContainerID="3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f" Namespace="calico-apiserver" Pod="calico-apiserver-8447d995cc-vpb8q" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--vpb8q-" Dec 16 12:46:40.461850 containerd[2141]: 2025-12-16 12:46:40.197 [INFO][5371] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f" Namespace="calico-apiserver" Pod="calico-apiserver-8447d995cc-vpb8q" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--vpb8q-eth0" Dec 16 12:46:40.461850 containerd[2141]: 2025-12-16 12:46:40.223 [INFO][5386] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f" HandleID="k8s-pod-network.3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f" Workload="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--vpb8q-eth0" Dec 16 12:46:40.461850 containerd[2141]: 2025-12-16 12:46:40.223 [INFO][5386] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f" HandleID="k8s-pod-network.3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f" Workload="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--vpb8q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afa0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-a-a4975b77c5", "pod":"calico-apiserver-8447d995cc-vpb8q", "timestamp":"2025-12-16 12:46:40.223692955 +0000 UTC"}, Hostname:"ci-4515.1.0-a-a4975b77c5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:40.461850 containerd[2141]: 2025-12-16 12:46:40.223 [INFO][5386] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:40.461850 containerd[2141]: 2025-12-16 12:46:40.381 [INFO][5386] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:46:40.461850 containerd[2141]: 2025-12-16 12:46:40.382 [INFO][5386] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-a4975b77c5' Dec 16 12:46:40.461850 containerd[2141]: 2025-12-16 12:46:40.395 [INFO][5386] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.461850 containerd[2141]: 2025-12-16 12:46:40.408 [INFO][5386] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.461850 containerd[2141]: 2025-12-16 12:46:40.415 [INFO][5386] ipam/ipam.go 511: Trying affinity for 192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.461850 containerd[2141]: 2025-12-16 12:46:40.418 [INFO][5386] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.461850 containerd[2141]: 2025-12-16 12:46:40.421 [INFO][5386] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.461850 containerd[2141]: 2025-12-16 12:46:40.421 [INFO][5386] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.461850 containerd[2141]: 2025-12-16 12:46:40.423 [INFO][5386] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f Dec 16 12:46:40.461850 containerd[2141]: 2025-12-16 12:46:40.428 [INFO][5386] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.461850 containerd[2141]: 2025-12-16 12:46:40.438 [INFO][5386] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.49.7/26] block=192.168.49.0/26 handle="k8s-pod-network.3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.461850 containerd[2141]: 2025-12-16 12:46:40.439 [INFO][5386] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.7/26] handle="k8s-pod-network.3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:40.461850 containerd[2141]: 2025-12-16 12:46:40.439 [INFO][5386] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
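[Editor's note, a purely illustrative sketch, not Calico's libcalico-go implementation; all names below are made up.] Each assignment in the trace follows the same sequence: acquire the host-wide IPAM lock, load the affine block, claim the next free address, write the block back to the datastore, release the lock. In this trace, successive claims against 192.168.49.0/26 yield 192.168.49.5, .6 and .7 for the pods being networked. The pattern, reduced to a toy allocator, looks roughly like:

package main

import (
	"errors"
	"fmt"
	"net/netip"
	"sync"
)

// blockAllocator mimics the per-host pattern visible in the log:
// one lock serialises claims against a single affine block.
type blockAllocator struct {
	mu    sync.Mutex            // stands in for the "host-wide IPAM lock"
	block netip.Prefix          // e.g. 192.168.49.0/26
	used  map[netip.Addr]string // address -> handle, like k8s-pod-network.<containerID>
}

func (a *blockAllocator) assignNext(handle string) (netip.Addr, error) {
	a.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."

	for addr := a.block.Addr(); a.block.Contains(addr); addr = addr.Next() {
		if _, taken := a.used[addr]; taken {
			continue
		}
		a.used[addr] = handle // "Writing block in order to claim IPs"
		return addr, nil
	}
	return netip.Addr{}, errors.New("block exhausted")
}

func main() {
	alloc := &blockAllocator{
		block: netip.MustParsePrefix("192.168.49.0/26"),
		used:  map[netip.Addr]string{},
	}
	ip, _ := alloc.assignNext("k8s-pod-network.example-handle")
	fmt.Println(ip) // 192.168.49.0 in this toy; the real allocator also honours reserved
	// ranges (HostReservedAttrIPv4s in the log) and handles that already exist.
}
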
Dec 16 12:46:40.461850 containerd[2141]: 2025-12-16 12:46:40.439 [INFO][5386] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.49.7/26] IPv6=[] ContainerID="3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f" HandleID="k8s-pod-network.3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f" Workload="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--vpb8q-eth0" Dec 16 12:46:40.462600 containerd[2141]: 2025-12-16 12:46:40.441 [INFO][5371] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f" Namespace="calico-apiserver" Pod="calico-apiserver-8447d995cc-vpb8q" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--vpb8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--vpb8q-eth0", GenerateName:"calico-apiserver-8447d995cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4be582d-98bf-4dca-8981-8263274550a3", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8447d995cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-a4975b77c5", ContainerID:"", Pod:"calico-apiserver-8447d995cc-vpb8q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7d770f48f2f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:40.462600 containerd[2141]: 2025-12-16 12:46:40.441 [INFO][5371] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.7/32] ContainerID="3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f" Namespace="calico-apiserver" Pod="calico-apiserver-8447d995cc-vpb8q" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--vpb8q-eth0" Dec 16 12:46:40.462600 containerd[2141]: 2025-12-16 12:46:40.441 [INFO][5371] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d770f48f2f ContainerID="3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f" Namespace="calico-apiserver" Pod="calico-apiserver-8447d995cc-vpb8q" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--vpb8q-eth0" Dec 16 12:46:40.462600 containerd[2141]: 2025-12-16 12:46:40.445 [INFO][5371] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f" Namespace="calico-apiserver" Pod="calico-apiserver-8447d995cc-vpb8q" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--vpb8q-eth0" Dec 16 12:46:40.462600 containerd[2141]: 2025-12-16 12:46:40.446 [INFO][5371] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f" Namespace="calico-apiserver" Pod="calico-apiserver-8447d995cc-vpb8q" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--vpb8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--vpb8q-eth0", GenerateName:"calico-apiserver-8447d995cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4be582d-98bf-4dca-8981-8263274550a3", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8447d995cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-a4975b77c5", ContainerID:"3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f", Pod:"calico-apiserver-8447d995cc-vpb8q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7d770f48f2f", MAC:"7e:30:e3:9f:32:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:40.462600 containerd[2141]: 2025-12-16 12:46:40.457 [INFO][5371] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f" Namespace="calico-apiserver" Pod="calico-apiserver-8447d995cc-vpb8q" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-calico--apiserver--8447d995cc--vpb8q-eth0" Dec 16 12:46:40.564593 containerd[2141]: time="2025-12-16T12:46:40.564549580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-knbcd,Uid:50d00cc7-1203-4290-806c-1437385334b5,Namespace:calico-system,Attempt:0,}" Dec 16 12:46:40.578510 containerd[2141]: time="2025-12-16T12:46:40.578467269Z" level=info msg="Container 1f7edeb2c6ade27dc217c70dfe3d6b8e50cf2ee5c6ea80ef391bb42d506cec5b: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:40.616805 containerd[2141]: time="2025-12-16T12:46:40.616748845Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:40.718272 systemd-networkd[1724]: caliea420887185: Gained IPv6LL Dec 16 12:46:40.878489 containerd[2141]: time="2025-12-16T12:46:40.878426622Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:46:40.878665 containerd[2141]: time="2025-12-16T12:46:40.878632460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:40.879111 kubelet[3656]: 
E1216 12:46:40.878802 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:46:40.879111 kubelet[3656]: E1216 12:46:40.878854 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:46:40.879111 kubelet[3656]: E1216 12:46:40.878952 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mpmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q9ldj_calico-system(c8d2f0f9-d4bf-424e-80b4-888570287c6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:40.881503 containerd[2141]: time="2025-12-16T12:46:40.881471171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:46:41.432375 containerd[2141]: time="2025-12-16T12:46:41.432325738Z" level=info msg="CreateContainer within sandbox \"fee5cdb099be8c83c907a61816639e977685993f62fc22c81202c1de10fd4b09\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f7edeb2c6ade27dc217c70dfe3d6b8e50cf2ee5c6ea80ef391bb42d506cec5b\"" Dec 16 12:46:41.433511 containerd[2141]: time="2025-12-16T12:46:41.433475852Z" level=info msg="StartContainer for 
\"1f7edeb2c6ade27dc217c70dfe3d6b8e50cf2ee5c6ea80ef391bb42d506cec5b\"" Dec 16 12:46:41.435667 containerd[2141]: time="2025-12-16T12:46:41.435633595Z" level=info msg="connecting to shim 1f7edeb2c6ade27dc217c70dfe3d6b8e50cf2ee5c6ea80ef391bb42d506cec5b" address="unix:///run/containerd/s/9a7eee131dc9089ab033e2243f81ba0ae7810498fa7f03094a6f55bf49292642" protocol=ttrpc version=3 Dec 16 12:46:41.456303 systemd[1]: Started cri-containerd-1f7edeb2c6ade27dc217c70dfe3d6b8e50cf2ee5c6ea80ef391bb42d506cec5b.scope - libcontainer container 1f7edeb2c6ade27dc217c70dfe3d6b8e50cf2ee5c6ea80ef391bb42d506cec5b. Dec 16 12:46:41.465000 audit: BPF prog-id=219 op=LOAD Dec 16 12:46:41.466000 audit: BPF prog-id=220 op=LOAD Dec 16 12:46:41.466000 audit[5432]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=5246 pid=5432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:41.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166376564656232633661646532376463323137633730646665336436 Dec 16 12:46:41.466000 audit: BPF prog-id=220 op=UNLOAD Dec 16 12:46:41.466000 audit[5432]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5246 pid=5432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:41.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166376564656232633661646532376463323137633730646665336436 Dec 16 12:46:41.466000 audit: BPF prog-id=221 op=LOAD Dec 16 12:46:41.466000 audit[5432]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=5246 pid=5432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:41.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166376564656232633661646532376463323137633730646665336436 Dec 16 12:46:41.466000 audit: BPF prog-id=222 op=LOAD Dec 16 12:46:41.466000 audit[5432]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=5246 pid=5432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:41.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166376564656232633661646532376463323137633730646665336436 Dec 16 12:46:41.466000 audit: BPF prog-id=222 op=UNLOAD Dec 16 12:46:41.466000 audit[5432]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 
ppid=5246 pid=5432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:41.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166376564656232633661646532376463323137633730646665336436 Dec 16 12:46:41.466000 audit: BPF prog-id=221 op=UNLOAD Dec 16 12:46:41.466000 audit[5432]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5246 pid=5432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:41.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166376564656232633661646532376463323137633730646665336436 Dec 16 12:46:41.466000 audit: BPF prog-id=223 op=LOAD Dec 16 12:46:41.466000 audit[5432]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=5246 pid=5432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:41.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166376564656232633661646532376463323137633730646665336436 Dec 16 12:46:41.574346 containerd[2141]: time="2025-12-16T12:46:41.574295577Z" level=info msg="StartContainer for \"1f7edeb2c6ade27dc217c70dfe3d6b8e50cf2ee5c6ea80ef391bb42d506cec5b\" returns successfully" Dec 16 12:46:41.614389 systemd-networkd[1724]: cali8a9342abde4: Gained IPv6LL Dec 16 12:46:41.711867 containerd[2141]: time="2025-12-16T12:46:41.711707283Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:41.786000 audit[5487]: NETFILTER_CFG table=filter:126 family=2 entries=22 op=nft_register_rule pid=5487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:41.786000 audit[5487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffffc1cfcc0 a2=0 a3=1 items=0 ppid=3794 pid=5487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:41.786000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:41.794007 kubelet[3656]: I1216 12:46:41.793015 3656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-f76h8" podStartSLOduration=42.792995146 podStartE2EDuration="42.792995146s" podCreationTimestamp="2025-12-16 12:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:46:41.768953758 +0000 UTC m=+48.277607556" watchObservedRunningTime="2025-12-16 12:46:41.792995146 +0000 UTC m=+48.301648808" Dec 16 
12:46:41.795000 audit[5487]: NETFILTER_CFG table=nat:127 family=2 entries=12 op=nft_register_rule pid=5487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:41.795000 audit[5487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffc1cfcc0 a2=0 a3=1 items=0 ppid=3794 pid=5487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:41.795000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:41.879000 audit[5507]: NETFILTER_CFG table=filter:128 family=2 entries=19 op=nft_register_rule pid=5507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:41.879000 audit[5507]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffede21ae0 a2=0 a3=1 items=0 ppid=3794 pid=5507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:41.879000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:41.885000 audit[5507]: NETFILTER_CFG table=nat:129 family=2 entries=33 op=nft_register_chain pid=5507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:41.885000 audit[5507]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=13428 a0=3 a1=ffffede21ae0 a2=0 a3=1 items=0 ppid=3794 pid=5507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:41.885000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:41.942076 systemd-networkd[1724]: cali10308805ad6: Link UP Dec 16 12:46:41.942339 systemd-networkd[1724]: cali10308805ad6: Gained carrier Dec 16 12:46:41.959950 containerd[2141]: 2025-12-16 12:46:41.847 [INFO][5489] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:46:41.959950 containerd[2141]: 2025-12-16 12:46:41.858 [INFO][5489] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--a--a4975b77c5-k8s-goldmane--666569f655--knbcd-eth0 goldmane-666569f655- calico-system 50d00cc7-1203-4290-806c-1437385334b5 799 0 2025-12-16 12:46:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515.1.0-a-a4975b77c5 goldmane-666569f655-knbcd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali10308805ad6 [] [] }} ContainerID="4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb" Namespace="calico-system" Pod="goldmane-666569f655-knbcd" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-goldmane--666569f655--knbcd-" Dec 16 12:46:41.959950 containerd[2141]: 2025-12-16 12:46:41.858 [INFO][5489] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb" Namespace="calico-system" Pod="goldmane-666569f655-knbcd" 
WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-goldmane--666569f655--knbcd-eth0" Dec 16 12:46:41.959950 containerd[2141]: 2025-12-16 12:46:41.889 [INFO][5501] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb" HandleID="k8s-pod-network.4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb" Workload="ci--4515.1.0--a--a4975b77c5-k8s-goldmane--666569f655--knbcd-eth0" Dec 16 12:46:41.959950 containerd[2141]: 2025-12-16 12:46:41.889 [INFO][5501] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb" HandleID="k8s-pod-network.4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb" Workload="ci--4515.1.0--a--a4975b77c5-k8s-goldmane--666569f655--knbcd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cafe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-a-a4975b77c5", "pod":"goldmane-666569f655-knbcd", "timestamp":"2025-12-16 12:46:41.889818006 +0000 UTC"}, Hostname:"ci-4515.1.0-a-a4975b77c5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:46:41.959950 containerd[2141]: 2025-12-16 12:46:41.889 [INFO][5501] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:46:41.959950 containerd[2141]: 2025-12-16 12:46:41.890 [INFO][5501] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:46:41.959950 containerd[2141]: 2025-12-16 12:46:41.890 [INFO][5501] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-a-a4975b77c5' Dec 16 12:46:41.959950 containerd[2141]: 2025-12-16 12:46:41.896 [INFO][5501] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:41.959950 containerd[2141]: 2025-12-16 12:46:41.902 [INFO][5501] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:41.959950 containerd[2141]: 2025-12-16 12:46:41.909 [INFO][5501] ipam/ipam.go 511: Trying affinity for 192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:41.959950 containerd[2141]: 2025-12-16 12:46:41.911 [INFO][5501] ipam/ipam.go 158: Attempting to load block cidr=192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:41.959950 containerd[2141]: 2025-12-16 12:46:41.914 [INFO][5501] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:41.959950 containerd[2141]: 2025-12-16 12:46:41.914 [INFO][5501] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:41.959950 containerd[2141]: 2025-12-16 12:46:41.915 [INFO][5501] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb Dec 16 12:46:41.959950 containerd[2141]: 2025-12-16 12:46:41.921 [INFO][5501] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:41.959950 containerd[2141]: 
2025-12-16 12:46:41.937 [INFO][5501] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.49.8/26] block=192.168.49.0/26 handle="k8s-pod-network.4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:41.959950 containerd[2141]: 2025-12-16 12:46:41.938 [INFO][5501] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.49.8/26] handle="k8s-pod-network.4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb" host="ci-4515.1.0-a-a4975b77c5" Dec 16 12:46:41.959950 containerd[2141]: 2025-12-16 12:46:41.938 [INFO][5501] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:46:41.959950 containerd[2141]: 2025-12-16 12:46:41.938 [INFO][5501] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.49.8/26] IPv6=[] ContainerID="4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb" HandleID="k8s-pod-network.4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb" Workload="ci--4515.1.0--a--a4975b77c5-k8s-goldmane--666569f655--knbcd-eth0" Dec 16 12:46:41.960463 containerd[2141]: 2025-12-16 12:46:41.939 [INFO][5489] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb" Namespace="calico-system" Pod="goldmane-666569f655-knbcd" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-goldmane--666569f655--knbcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--a4975b77c5-k8s-goldmane--666569f655--knbcd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"50d00cc7-1203-4290-806c-1437385334b5", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-a4975b77c5", ContainerID:"", Pod:"goldmane-666569f655-knbcd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.49.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali10308805ad6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:41.960463 containerd[2141]: 2025-12-16 12:46:41.940 [INFO][5489] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.49.8/32] ContainerID="4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb" Namespace="calico-system" Pod="goldmane-666569f655-knbcd" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-goldmane--666569f655--knbcd-eth0" Dec 16 12:46:41.960463 containerd[2141]: 2025-12-16 12:46:41.940 [INFO][5489] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali10308805ad6 ContainerID="4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb" Namespace="calico-system" Pod="goldmane-666569f655-knbcd" 
WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-goldmane--666569f655--knbcd-eth0" Dec 16 12:46:41.960463 containerd[2141]: 2025-12-16 12:46:41.942 [INFO][5489] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb" Namespace="calico-system" Pod="goldmane-666569f655-knbcd" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-goldmane--666569f655--knbcd-eth0" Dec 16 12:46:41.960463 containerd[2141]: 2025-12-16 12:46:41.943 [INFO][5489] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb" Namespace="calico-system" Pod="goldmane-666569f655-knbcd" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-goldmane--666569f655--knbcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--a--a4975b77c5-k8s-goldmane--666569f655--knbcd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"50d00cc7-1203-4290-806c-1437385334b5", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 46, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-a-a4975b77c5", ContainerID:"4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb", Pod:"goldmane-666569f655-knbcd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.49.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali10308805ad6", MAC:"76:f5:60:6e:9e:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:46:41.960463 containerd[2141]: 2025-12-16 12:46:41.955 [INFO][5489] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb" Namespace="calico-system" Pod="goldmane-666569f655-knbcd" WorkloadEndpoint="ci--4515.1.0--a--a4975b77c5-k8s-goldmane--666569f655--knbcd-eth0" Dec 16 12:46:42.062335 systemd-networkd[1724]: cali5fc2ea76648: Gained IPv6LL Dec 16 12:46:42.076368 containerd[2141]: time="2025-12-16T12:46:42.076202970Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:46:42.076368 containerd[2141]: time="2025-12-16T12:46:42.076321565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:42.076599 kubelet[3656]: E1216 12:46:42.076552 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:46:42.076953 kubelet[3656]: E1216 12:46:42.076607 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:46:42.076953 kubelet[3656]: E1216 12:46:42.076710 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mpmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q9ldj_calico-system(c8d2f0f9-d4bf-424e-80b4-888570287c6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:42.077888 kubelet[3656]: E1216 12:46:42.077845 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q9ldj" podUID="c8d2f0f9-d4bf-424e-80b4-888570287c6a" Dec 16 12:46:42.254344 systemd-networkd[1724]: cali7d770f48f2f: Gained IPv6LL Dec 16 12:46:42.347610 containerd[2141]: time="2025-12-16T12:46:42.347124540Z" level=info msg="connecting to shim ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f" address="unix:///run/containerd/s/f9391a8d8e526af589136ca974043fc500b0770606dc49b357ffadba8fd59bda" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:42.349135 containerd[2141]: time="2025-12-16T12:46:42.347406172Z" level=info msg="connecting to shim 3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f" address="unix:///run/containerd/s/d02384613366391c05784308da0dd6609aaec72309a49eb8bf2e05ea88431b48" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:42.381733 systemd[1]: Started cri-containerd-ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f.scope - libcontainer container ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f. Dec 16 12:46:42.396533 systemd[1]: Started cri-containerd-3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f.scope - libcontainer container 3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f. Dec 16 12:46:42.397565 containerd[2141]: time="2025-12-16T12:46:42.397441374Z" level=info msg="connecting to shim 761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620" address="unix:///run/containerd/s/49ab85c7d14bc7d6f5c1caea4bdf60f3d77281d88feebbf50aa6baa96c3b3fad" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:42.403000 audit: BPF prog-id=224 op=LOAD Dec 16 12:46:42.404000 audit: BPF prog-id=225 op=LOAD Dec 16 12:46:42.404000 audit[5556]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=5530 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:42.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361383331666131663534303038363561303038313836626164323135 Dec 16 12:46:42.404000 audit: BPF prog-id=225 op=UNLOAD Dec 16 12:46:42.404000 audit[5556]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5530 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:42.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361383331666131663534303038363561303038313836626164323135 Dec 16 12:46:42.405000 audit: BPF prog-id=226 op=LOAD Dec 16 12:46:42.405000 audit[5556]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=5530 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:42.405000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361383331666131663534303038363561303038313836626164323135 Dec 16 12:46:42.405000 audit: BPF prog-id=227 op=LOAD Dec 16 12:46:42.405000 audit[5556]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=5530 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:42.405000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361383331666131663534303038363561303038313836626164323135 Dec 16 12:46:42.406000 audit: BPF prog-id=227 op=UNLOAD Dec 16 12:46:42.406000 audit[5556]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5530 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:42.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361383331666131663534303038363561303038313836626164323135 Dec 16 12:46:42.406000 audit: BPF prog-id=226 op=UNLOAD Dec 16 12:46:42.406000 audit[5556]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5530 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:42.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361383331666131663534303038363561303038313836626164323135 Dec 16 12:46:42.406000 audit: BPF prog-id=228 op=LOAD Dec 16 12:46:42.406000 audit[5556]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=5530 pid=5556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:42.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361383331666131663534303038363561303038313836626164323135 Dec 16 12:46:42.419000 audit: BPF prog-id=229 op=LOAD Dec 16 12:46:42.420000 audit: BPF prog-id=230 op=LOAD Dec 16 12:46:42.420000 audit[5558]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5536 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:42.420000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366373133646362393233666262393731336665383134353433333365 Dec 16 12:46:42.420000 audit: BPF prog-id=230 op=UNLOAD Dec 16 12:46:42.420000 audit[5558]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5536 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:42.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366373133646362393233666262393731336665383134353433333365 Dec 16 12:46:42.421000 audit: BPF prog-id=231 op=LOAD Dec 16 12:46:42.421000 audit[5558]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5536 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:42.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366373133646362393233666262393731336665383134353433333365 Dec 16 12:46:42.421000 audit: BPF prog-id=232 op=LOAD Dec 16 12:46:42.421000 audit[5558]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5536 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:42.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366373133646362393233666262393731336665383134353433333365 Dec 16 12:46:42.421000 audit: BPF prog-id=232 op=UNLOAD Dec 16 12:46:42.421000 audit[5558]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5536 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:42.421000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366373133646362393233666262393731336665383134353433333365 Dec 16 12:46:42.421000 audit: BPF prog-id=231 op=UNLOAD Dec 16 12:46:42.421000 audit[5558]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5536 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:42.421000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366373133646362393233666262393731336665383134353433333365 Dec 16 12:46:42.422000 audit: BPF prog-id=233 op=LOAD Dec 16 12:46:42.422000 audit[5558]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5536 pid=5558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:42.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366373133646362393233666262393731336665383134353433333365 Dec 16 12:46:42.446816 systemd[1]: Started cri-containerd-761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620.scope - libcontainer container 761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620. Dec 16 12:46:42.469000 audit: BPF prog-id=234 op=LOAD Dec 16 12:46:42.470000 audit: BPF prog-id=235 op=LOAD Dec 16 12:46:42.470000 audit[5614]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=5595 pid=5614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:42.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736313830326237333838653838353433393431346139633039303366 Dec 16 12:46:42.470000 audit: BPF prog-id=235 op=UNLOAD Dec 16 12:46:42.470000 audit[5614]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5595 pid=5614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:42.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736313830326237333838653838353433393431346139633039303366 Dec 16 12:46:42.470000 audit: BPF prog-id=236 op=LOAD Dec 16 12:46:42.470000 audit[5614]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=5595 pid=5614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:42.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736313830326237333838653838353433393431346139633039303366 Dec 16 12:46:42.470000 audit: BPF prog-id=237 op=LOAD Dec 16 12:46:42.470000 audit[5614]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=5595 pid=5614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:42.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736313830326237333838653838353433393431346139633039303366 Dec 16 12:46:42.470000 audit: BPF prog-id=237 op=UNLOAD Dec 16 12:46:42.470000 audit[5614]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5595 pid=5614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:42.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736313830326237333838653838353433393431346139633039303366 Dec 16 12:46:42.470000 audit: BPF prog-id=236 op=UNLOAD Dec 16 12:46:42.470000 audit[5614]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5595 pid=5614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:42.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736313830326237333838653838353433393431346139633039303366 Dec 16 12:46:42.470000 audit: BPF prog-id=238 op=LOAD Dec 16 12:46:42.470000 audit[5614]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=5595 pid=5614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:42.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736313830326237333838653838353433393431346139633039303366 Dec 16 12:46:42.756971 kubelet[3656]: E1216 12:46:42.756834 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q9ldj" podUID="c8d2f0f9-d4bf-424e-80b4-888570287c6a" Dec 16 12:46:43.663297 systemd-networkd[1724]: cali10308805ad6: Gained IPv6LL Dec 16 12:46:45.547931 kubelet[3656]: I1216 12:46:45.547880 3656 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:46:45.569150 containerd[2141]: time="2025-12-16T12:46:45.568955705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:46:45.589000 audit[5695]: NETFILTER_CFG table=filter:130 family=2 entries=15 op=nft_register_rule pid=5695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:45.592630 kernel: kauditd_printk_skb: 162 callbacks suppressed Dec 16 12:46:45.592737 kernel: audit: type=1325 audit(1765889205.589:677): table=filter:130 family=2 entries=15 op=nft_register_rule pid=5695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:45.589000 audit[5695]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe23ba8c0 a2=0 a3=1 items=0 ppid=3794 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:45.622344 kernel: audit: type=1300 audit(1765889205.589:677): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe23ba8c0 a2=0 a3=1 items=0 ppid=3794 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:45.589000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:45.633707 kernel: audit: type=1327 audit(1765889205.589:677): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:45.607000 audit[5695]: NETFILTER_CFG table=nat:131 family=2 entries=25 op=nft_register_chain pid=5695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:45.642685 kernel: audit: type=1325 audit(1765889205.607:678): table=nat:131 family=2 entries=25 op=nft_register_chain pid=5695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:45.607000 audit[5695]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8580 a0=3 a1=ffffe23ba8c0 a2=0 a3=1 items=0 ppid=3794 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:45.660792 kernel: audit: type=1300 audit(1765889205.607:678): arch=c00000b7 syscall=211 success=yes exit=8580 a0=3 a1=ffffe23ba8c0 a2=0 a3=1 items=0 ppid=3794 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:45.671530 kernel: audit: type=1327 audit(1765889205.607:678): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:45.607000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:45.822190 containerd[2141]: time="2025-12-16T12:46:45.822006571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c8c58856c-hs58v,Uid:1c9e39e8-3a67-4975-af12-07644724165b,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca831fa1f5400865a008186bad2154c1fba40780326c22f13d708c4daccbe51f\"" Dec 16 
12:46:47.324000 audit: BPF prog-id=239 op=LOAD Dec 16 12:46:47.324000 audit[5720]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe3d8f858 a2=98 a3=ffffe3d8f848 items=0 ppid=5699 pid=5720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.349220 kernel: audit: type=1334 audit(1765889207.324:679): prog-id=239 op=LOAD Dec 16 12:46:47.349361 kernel: audit: type=1300 audit(1765889207.324:679): arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe3d8f858 a2=98 a3=ffffe3d8f848 items=0 ppid=5699 pid=5720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.324000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:46:47.369095 kernel: audit: type=1327 audit(1765889207.324:679): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:46:47.324000 audit: BPF prog-id=239 op=UNLOAD Dec 16 12:46:47.374638 kernel: audit: type=1334 audit(1765889207.324:680): prog-id=239 op=UNLOAD Dec 16 12:46:47.324000 audit[5720]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe3d8f828 a3=0 items=0 ppid=5699 pid=5720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.324000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:46:47.324000 audit: BPF prog-id=240 op=LOAD Dec 16 12:46:47.324000 audit[5720]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe3d8f708 a2=74 a3=95 items=0 ppid=5699 pid=5720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.324000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:46:47.324000 audit: BPF prog-id=240 op=UNLOAD Dec 16 12:46:47.324000 audit[5720]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5699 pid=5720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.324000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:46:47.324000 audit: BPF prog-id=241 op=LOAD Dec 16 12:46:47.324000 audit[5720]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe3d8f738 a2=40 a3=ffffe3d8f768 items=0 ppid=5699 pid=5720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.324000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:46:47.324000 audit: BPF prog-id=241 op=UNLOAD Dec 16 12:46:47.324000 audit[5720]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffe3d8f768 items=0 ppid=5699 pid=5720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.324000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:46:47.330000 audit: BPF prog-id=242 op=LOAD Dec 16 12:46:47.330000 audit[5721]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc2d365e8 a2=98 a3=ffffc2d365d8 items=0 ppid=5699 pid=5721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.330000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:47.349000 audit: BPF prog-id=242 op=UNLOAD Dec 16 12:46:47.349000 audit[5721]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc2d365b8 a3=0 items=0 ppid=5699 pid=5721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.349000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:47.350000 audit: BPF prog-id=243 op=LOAD Dec 16 12:46:47.350000 audit[5721]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc2d36278 a2=74 a3=95 items=0 ppid=5699 pid=5721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.350000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:47.350000 audit: BPF prog-id=243 op=UNLOAD Dec 16 12:46:47.350000 audit[5721]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=5699 pid=5721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.350000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:47.350000 audit: BPF prog-id=244 op=LOAD Dec 16 12:46:47.350000 audit[5721]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc2d362d8 a2=94 a3=2 items=0 ppid=5699 pid=5721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.350000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:47.368000 audit: BPF prog-id=244 op=UNLOAD Dec 16 12:46:47.368000 audit[5721]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=5699 pid=5721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.368000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:47.458000 audit: BPF prog-id=245 op=LOAD Dec 16 12:46:47.458000 audit[5721]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc2d36298 a2=40 a3=ffffc2d362c8 items=0 ppid=5699 pid=5721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.458000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:47.459000 audit: BPF prog-id=245 op=UNLOAD Dec 16 12:46:47.459000 audit[5721]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc2d362c8 items=0 ppid=5699 pid=5721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.459000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:47.465000 audit: BPF prog-id=246 op=LOAD Dec 16 12:46:47.465000 audit[5721]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc2d362a8 a2=94 a3=4 items=0 ppid=5699 pid=5721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.465000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:47.465000 audit: BPF prog-id=246 op=UNLOAD Dec 16 12:46:47.465000 audit[5721]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=5699 pid=5721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.465000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:47.465000 audit: BPF prog-id=247 op=LOAD Dec 16 12:46:47.465000 audit[5721]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc2d360e8 a2=94 a3=5 items=0 ppid=5699 pid=5721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.465000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:47.465000 audit: BPF prog-id=247 op=UNLOAD Dec 16 12:46:47.465000 audit[5721]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=5699 pid=5721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.465000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:47.465000 audit: BPF prog-id=248 op=LOAD Dec 16 12:46:47.465000 audit[5721]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc2d36318 a2=94 a3=6 items=0 ppid=5699 pid=5721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.465000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:47.466000 audit: BPF prog-id=248 op=UNLOAD Dec 16 12:46:47.466000 audit[5721]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=5699 pid=5721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.466000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:47.466000 audit: BPF prog-id=249 op=LOAD Dec 16 12:46:47.466000 audit[5721]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc2d35ae8 a2=94 a3=83 items=0 ppid=5699 pid=5721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.466000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:47.466000 audit: BPF prog-id=250 op=LOAD Dec 16 12:46:47.466000 audit[5721]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc2d358a8 a2=94 a3=2 items=0 ppid=5699 pid=5721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.466000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:47.466000 audit: BPF prog-id=250 op=UNLOAD Dec 16 12:46:47.466000 audit[5721]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=5699 pid=5721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.466000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:47.467000 audit: BPF prog-id=249 op=UNLOAD Dec 16 12:46:47.467000 audit[5721]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=eb85620 a3=eb78b00 items=0 ppid=5699 pid=5721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.467000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:46:47.475000 audit: BPF prog-id=251 op=LOAD Dec 16 12:46:47.475000 audit[5725]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc9c4abe8 a2=98 a3=ffffc9c4abd8 items=0 ppid=5699 pid=5725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.475000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:46:47.475000 audit: BPF prog-id=251 op=UNLOAD Dec 16 12:46:47.475000 audit[5725]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc9c4abb8 a3=0 items=0 ppid=5699 pid=5725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.475000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:46:47.475000 audit: BPF prog-id=252 op=LOAD Dec 16 12:46:47.475000 audit[5725]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc9c4aa98 a2=74 a3=95 items=0 ppid=5699 pid=5725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.475000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:46:47.475000 audit: BPF prog-id=252 op=UNLOAD Dec 16 12:46:47.475000 audit[5725]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5699 pid=5725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.475000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:46:47.475000 audit: BPF prog-id=253 op=LOAD Dec 16 12:46:47.475000 audit[5725]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc9c4aac8 a2=40 a3=ffffc9c4aaf8 items=0 ppid=5699 pid=5725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:47.475000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:46:47.475000 audit: BPF prog-id=253 op=UNLOAD Dec 16 12:46:47.475000 audit[5725]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffc9c4aaf8 items=0 ppid=5699 pid=5725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:46:47.475000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:46:48.422760 containerd[2141]: time="2025-12-16T12:46:48.422639236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8447d995cc-vpb8q,Uid:f4be582d-98bf-4dca-8981-8263274550a3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3f713dcb923fbb9713fe81454333e7db5f1d7bbbcf39fc8b5f88647b9e20ca1f\"" Dec 16 12:46:48.597032 systemd-networkd[1724]: vxlan.calico: Link UP Dec 16 12:46:48.597040 systemd-networkd[1724]: vxlan.calico: Gained carrier Dec 16 12:46:48.626000 audit: BPF prog-id=254 op=LOAD Dec 16 12:46:48.626000 audit[5781]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffc8a79c8 a2=98 a3=fffffc8a79b8 items=0 ppid=5699 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.626000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:48.626000 audit: BPF prog-id=254 op=UNLOAD Dec 16 12:46:48.626000 audit[5781]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffffc8a7998 a3=0 items=0 ppid=5699 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.626000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:48.626000 audit: BPF prog-id=255 op=LOAD Dec 16 12:46:48.626000 audit[5781]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffc8a76a8 a2=74 a3=95 items=0 ppid=5699 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.626000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:48.626000 audit: BPF prog-id=255 op=UNLOAD Dec 16 12:46:48.626000 audit[5781]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5699 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.626000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:48.626000 audit: BPF prog-id=256 op=LOAD Dec 16 12:46:48.626000 audit[5781]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffc8a7708 
a2=94 a3=2 items=0 ppid=5699 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.626000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:48.626000 audit: BPF prog-id=256 op=UNLOAD Dec 16 12:46:48.626000 audit[5781]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=5699 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.626000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:48.626000 audit: BPF prog-id=257 op=LOAD Dec 16 12:46:48.626000 audit[5781]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffc8a7588 a2=40 a3=fffffc8a75b8 items=0 ppid=5699 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.626000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:48.626000 audit: BPF prog-id=257 op=UNLOAD Dec 16 12:46:48.626000 audit[5781]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=fffffc8a75b8 items=0 ppid=5699 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.626000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:48.626000 audit: BPF prog-id=258 op=LOAD Dec 16 12:46:48.626000 audit[5781]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffc8a76d8 a2=94 a3=b7 items=0 ppid=5699 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.626000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:48.626000 audit: BPF prog-id=258 op=UNLOAD Dec 16 12:46:48.626000 audit[5781]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=5699 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.626000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:48.629000 audit: BPF prog-id=259 op=LOAD Dec 16 12:46:48.629000 audit[5781]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffc8a6d88 a2=94 a3=2 items=0 ppid=5699 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.629000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:48.629000 audit: BPF prog-id=259 op=UNLOAD Dec 16 12:46:48.629000 audit[5781]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=5699 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.629000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:48.629000 audit: BPF prog-id=260 op=LOAD Dec 16 12:46:48.629000 audit[5781]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffc8a6f18 a2=94 a3=30 items=0 ppid=5699 pid=5781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.629000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:46:48.637000 audit: BPF prog-id=261 op=LOAD Dec 16 12:46:48.637000 audit[5785]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd64b3668 a2=98 a3=ffffd64b3658 items=0 ppid=5699 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.637000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:48.638000 audit: BPF prog-id=261 op=UNLOAD Dec 16 12:46:48.638000 audit[5785]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd64b3638 a3=0 items=0 ppid=5699 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.638000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:48.638000 audit: BPF prog-id=262 op=LOAD Dec 16 12:46:48.638000 audit[5785]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd64b32f8 
a2=74 a3=95 items=0 ppid=5699 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.638000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:48.638000 audit: BPF prog-id=262 op=UNLOAD Dec 16 12:46:48.638000 audit[5785]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=5699 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.638000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:48.638000 audit: BPF prog-id=263 op=LOAD Dec 16 12:46:48.638000 audit[5785]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd64b3358 a2=94 a3=2 items=0 ppid=5699 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.638000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:48.638000 audit: BPF prog-id=263 op=UNLOAD Dec 16 12:46:48.638000 audit[5785]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=5699 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.638000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:48.720428 containerd[2141]: time="2025-12-16T12:46:48.719752209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v9rsw,Uid:cf72c120-6b19-4407-a659-b4a889422882,Namespace:kube-system,Attempt:0,} returns sandbox id \"761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620\"" Dec 16 12:46:48.724570 containerd[2141]: time="2025-12-16T12:46:48.724535796Z" level=info msg="CreateContainer within sandbox \"761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:46:48.736000 audit: BPF prog-id=264 op=LOAD Dec 16 12:46:48.736000 audit[5785]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd64b3318 a2=40 a3=ffffd64b3348 items=0 ppid=5699 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.736000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:48.736000 audit: BPF prog-id=264 op=UNLOAD Dec 16 
12:46:48.736000 audit[5785]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffd64b3348 items=0 ppid=5699 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.736000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:48.743000 audit: BPF prog-id=265 op=LOAD Dec 16 12:46:48.743000 audit[5785]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd64b3328 a2=94 a3=4 items=0 ppid=5699 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.743000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:48.743000 audit: BPF prog-id=265 op=UNLOAD Dec 16 12:46:48.743000 audit[5785]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=5699 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.743000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:48.744000 audit: BPF prog-id=266 op=LOAD Dec 16 12:46:48.744000 audit[5785]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd64b3168 a2=94 a3=5 items=0 ppid=5699 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.744000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:48.744000 audit: BPF prog-id=266 op=UNLOAD Dec 16 12:46:48.744000 audit[5785]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=5699 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.744000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:48.744000 audit: BPF prog-id=267 op=LOAD Dec 16 12:46:48.744000 audit[5785]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd64b3398 a2=94 a3=6 items=0 ppid=5699 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.744000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:48.744000 audit: BPF prog-id=267 op=UNLOAD Dec 16 12:46:48.744000 audit[5785]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=5699 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.744000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:48.744000 audit: BPF prog-id=268 op=LOAD Dec 16 12:46:48.744000 audit[5785]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd64b2b68 a2=94 a3=83 items=0 ppid=5699 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.744000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:48.745000 audit: BPF prog-id=269 op=LOAD Dec 16 12:46:48.745000 audit[5785]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffd64b2928 a2=94 a3=2 items=0 ppid=5699 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.745000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:48.745000 audit: BPF prog-id=269 op=UNLOAD Dec 16 12:46:48.745000 audit[5785]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=5699 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.745000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:48.745000 audit: BPF prog-id=268 op=UNLOAD Dec 16 12:46:48.745000 audit[5785]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=15566620 a3=15559b00 items=0 ppid=5699 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.745000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:46:48.750000 audit: BPF prog-id=260 op=UNLOAD Dec 16 12:46:48.750000 audit[5699]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000ec4240 a2=0 a3=0 items=0 ppid=4871 pid=5699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:48.750000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 12:46:49.217000 audit[5818]: NETFILTER_CFG table=raw:132 family=2 entries=21 op=nft_register_chain pid=5818 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:46:49.217000 audit[5818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffda37d220 a2=0 a3=ffffb776efa8 items=0 ppid=5699 pid=5818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:49.217000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:46:49.219000 audit[5821]: NETFILTER_CFG table=nat:133 family=2 entries=15 op=nft_register_chain pid=5821 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:46:49.219000 audit[5821]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffc4fbc280 a2=0 a3=ffff9924efa8 items=0 ppid=5699 pid=5821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:49.219000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:46:49.221000 audit[5819]: NETFILTER_CFG table=mangle:134 family=2 entries=16 op=nft_register_chain pid=5819 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:46:49.221000 audit[5819]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffedc6ccf0 a2=0 a3=ffff96a98fa8 items=0 ppid=5699 pid=5819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:49.221000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:46:49.396000 audit[5825]: NETFILTER_CFG table=filter:135 family=2 entries=315 op=nft_register_chain pid=5825 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:46:49.396000 audit[5825]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=187764 a0=3 a1=ffffe588b210 a2=0 a3=ffffa19e2fa8 items=0 ppid=5699 pid=5825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:49.396000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:46:49.554193 containerd[2141]: time="2025-12-16T12:46:49.554133663Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:50.254801 systemd-networkd[1724]: vxlan.calico: Gained IPv6LL Dec 16 12:46:51.523199 containerd[2141]: time="2025-12-16T12:46:51.522975665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:46:51.523199 containerd[2141]: time="2025-12-16T12:46:51.523104757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:51.523944 kubelet[3656]: E1216 12:46:51.523892 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:46:51.523944 kubelet[3656]: E1216 12:46:51.523945 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:46:51.526249 kubelet[3656]: E1216 12:46:51.524135 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9a24d6b5f1b4428ea6052e9fe2a904c0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kvdr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-fcd869d9b-bwx78_calico-system(7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:51.526733 containerd[2141]: time="2025-12-16T12:46:51.524298584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:46:51.785171 containerd[2141]: time="2025-12-16T12:46:51.785017353Z" level=info msg="connecting to shim 4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb" address="unix:///run/containerd/s/e627ca73a5c887070ab9eb87e2c8bcfcc272147e0ac78ca9b6508f6179bbf6b7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:46:51.811314 systemd[1]: Started cri-containerd-4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb.scope - 
libcontainer container 4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb. Dec 16 12:46:51.819000 audit: BPF prog-id=270 op=LOAD Dec 16 12:46:51.824059 kernel: kauditd_printk_skb: 194 callbacks suppressed Dec 16 12:46:51.824132 kernel: audit: type=1334 audit(1765889211.819:745): prog-id=270 op=LOAD Dec 16 12:46:51.828000 audit: BPF prog-id=271 op=LOAD Dec 16 12:46:51.834925 kernel: audit: type=1334 audit(1765889211.828:746): prog-id=271 op=LOAD Dec 16 12:46:51.828000 audit[5853]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5842 pid=5853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:51.852019 kernel: audit: type=1300 audit(1765889211.828:746): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5842 pid=5853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:51.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462333463623665396338373431323432633038663932396461626239 Dec 16 12:46:51.871467 kernel: audit: type=1327 audit(1765889211.828:746): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462333463623665396338373431323432633038663932396461626239 Dec 16 12:46:51.828000 audit: BPF prog-id=271 op=UNLOAD Dec 16 12:46:51.880834 kernel: audit: type=1334 audit(1765889211.828:747): prog-id=271 op=UNLOAD Dec 16 12:46:51.882672 containerd[2141]: time="2025-12-16T12:46:51.882136324Z" level=info msg="Container 7fbc3c9bbe69eafffb8ba5d0e74c608da96b7abdcaac4ac2646f33bc10b88211: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:46:51.828000 audit[5853]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5842 pid=5853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:51.901507 kernel: audit: type=1300 audit(1765889211.828:747): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5842 pid=5853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:51.920267 kernel: audit: type=1327 audit(1765889211.828:747): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462333463623665396338373431323432633038663932396461626239 Dec 16 12:46:51.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462333463623665396338373431323432633038663932396461626239 Dec 16 12:46:51.828000 audit: BPF prog-id=272 op=LOAD Dec 16 12:46:51.925915 kernel: audit: 
type=1334 audit(1765889211.828:748): prog-id=272 op=LOAD Dec 16 12:46:51.828000 audit[5853]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5842 pid=5853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:51.946315 kernel: audit: type=1300 audit(1765889211.828:748): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5842 pid=5853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:51.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462333463623665396338373431323432633038663932396461626239 Dec 16 12:46:51.966255 kernel: audit: type=1327 audit(1765889211.828:748): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462333463623665396338373431323432633038663932396461626239 Dec 16 12:46:51.828000 audit: BPF prog-id=273 op=LOAD Dec 16 12:46:51.828000 audit[5853]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=5842 pid=5853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:51.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462333463623665396338373431323432633038663932396461626239 Dec 16 12:46:51.828000 audit: BPF prog-id=273 op=UNLOAD Dec 16 12:46:51.828000 audit[5853]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5842 pid=5853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:51.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462333463623665396338373431323432633038663932396461626239 Dec 16 12:46:51.828000 audit: BPF prog-id=272 op=UNLOAD Dec 16 12:46:51.828000 audit[5853]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5842 pid=5853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:51.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462333463623665396338373431323432633038663932396461626239 Dec 16 12:46:51.828000 audit: BPF prog-id=274 op=LOAD Dec 16 12:46:51.828000 audit[5853]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=5842 pid=5853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:51.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462333463623665396338373431323432633038663932396461626239 Dec 16 12:46:52.026572 containerd[2141]: time="2025-12-16T12:46:52.026472381Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:52.029672 containerd[2141]: time="2025-12-16T12:46:52.029630795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-knbcd,Uid:50d00cc7-1203-4290-806c-1437385334b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b34cb6e9c8741242c08f929dabb97fdd7ead9d4a8e72109a8ae6d5e10e6c1fb\"" Dec 16 12:46:52.034052 containerd[2141]: time="2025-12-16T12:46:52.034003421Z" level=info msg="CreateContainer within sandbox \"761802b7388e885439414a9c0903f0c206835574a06094c8418bed0e46209620\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7fbc3c9bbe69eafffb8ba5d0e74c608da96b7abdcaac4ac2646f33bc10b88211\"" Dec 16 12:46:52.034679 containerd[2141]: time="2025-12-16T12:46:52.034653977Z" level=info msg="StartContainer for \"7fbc3c9bbe69eafffb8ba5d0e74c608da96b7abdcaac4ac2646f33bc10b88211\"" Dec 16 12:46:52.035619 containerd[2141]: time="2025-12-16T12:46:52.035521794Z" level=info msg="connecting to shim 7fbc3c9bbe69eafffb8ba5d0e74c608da96b7abdcaac4ac2646f33bc10b88211" address="unix:///run/containerd/s/49ab85c7d14bc7d6f5c1caea4bdf60f3d77281d88feebbf50aa6baa96c3b3fad" protocol=ttrpc version=3 Dec 16 12:46:52.062555 systemd[1]: Started cri-containerd-7fbc3c9bbe69eafffb8ba5d0e74c608da96b7abdcaac4ac2646f33bc10b88211.scope - libcontainer container 7fbc3c9bbe69eafffb8ba5d0e74c608da96b7abdcaac4ac2646f33bc10b88211. 
Dec 16 12:46:52.072000 audit: BPF prog-id=275 op=LOAD Dec 16 12:46:52.073000 audit: BPF prog-id=276 op=LOAD Dec 16 12:46:52.073000 audit[5878]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5595 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:52.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766626333633962626536396561666666623862613564306537346336 Dec 16 12:46:52.073000 audit: BPF prog-id=276 op=UNLOAD Dec 16 12:46:52.073000 audit[5878]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5595 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:52.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766626333633962626536396561666666623862613564306537346336 Dec 16 12:46:52.074826 containerd[2141]: time="2025-12-16T12:46:52.074676192Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:46:52.074826 containerd[2141]: time="2025-12-16T12:46:52.074783515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:52.073000 audit: BPF prog-id=277 op=LOAD Dec 16 12:46:52.073000 audit[5878]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5595 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:52.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766626333633962626536396561666666623862613564306537346336 Dec 16 12:46:52.073000 audit: BPF prog-id=278 op=LOAD Dec 16 12:46:52.075233 kubelet[3656]: E1216 12:46:52.075036 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:46:52.075233 kubelet[3656]: E1216 12:46:52.075112 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:46:52.073000 audit[5878]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=5595 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:52.073000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766626333633962626536396561666666623862613564306537346336 Dec 16 12:46:52.074000 audit: BPF prog-id=278 op=UNLOAD Dec 16 12:46:52.074000 audit[5878]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5595 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:52.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766626333633962626536396561666666623862613564306537346336 Dec 16 12:46:52.074000 audit: BPF prog-id=277 op=UNLOAD Dec 16 12:46:52.074000 audit[5878]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5595 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:52.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766626333633962626536396561666666623862613564306537346336 Dec 16 12:46:52.075664 kubelet[3656]: E1216 12:46:52.075461 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcrwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c8c58856c-hs58v_calico-system(1c9e39e8-3a67-4975-af12-07644724165b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:52.074000 audit: BPF prog-id=279 op=LOAD Dec 16 12:46:52.074000 audit[5878]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=5595 pid=5878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:52.074000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766626333633962626536396561666666623862613564306537346336 Dec 16 12:46:52.076800 containerd[2141]: time="2025-12-16T12:46:52.076761926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:46:52.077530 kubelet[3656]: E1216 12:46:52.077313 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c8c58856c-hs58v" podUID="1c9e39e8-3a67-4975-af12-07644724165b" Dec 16 12:46:52.126986 containerd[2141]: time="2025-12-16T12:46:52.126944252Z" level=info msg="StartContainer for \"7fbc3c9bbe69eafffb8ba5d0e74c608da96b7abdcaac4ac2646f33bc10b88211\" returns successfully" Dec 16 12:46:52.391190 containerd[2141]: time="2025-12-16T12:46:52.390917062Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:52.393970 containerd[2141]: time="2025-12-16T12:46:52.393863918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:46:52.393970 containerd[2141]: 
time="2025-12-16T12:46:52.393915920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:52.394290 kubelet[3656]: E1216 12:46:52.394244 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:52.394355 kubelet[3656]: E1216 12:46:52.394300 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:52.394566 kubelet[3656]: E1216 12:46:52.394492 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lszvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8447d995cc-vpb8q_calico-apiserver(f4be582d-98bf-4dca-8981-8263274550a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:52.394980 containerd[2141]: time="2025-12-16T12:46:52.394936502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:46:52.396307 kubelet[3656]: E1216 12:46:52.396153 3656 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8447d995cc-vpb8q" podUID="f4be582d-98bf-4dca-8981-8263274550a3" Dec 16 12:46:52.687591 containerd[2141]: time="2025-12-16T12:46:52.687443114Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:52.690366 containerd[2141]: time="2025-12-16T12:46:52.690255165Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:46:52.690366 containerd[2141]: time="2025-12-16T12:46:52.690309527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:52.690560 kubelet[3656]: E1216 12:46:52.690514 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:52.691040 kubelet[3656]: E1216 12:46:52.690581 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:46:52.691071 containerd[2141]: time="2025-12-16T12:46:52.690918969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:46:52.691473 kubelet[3656]: E1216 12:46:52.690793 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zg9nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8447d995cc-x57ls_calico-apiserver(c9372ebb-481a-480c-8bf1-ba7918503e79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:52.692493 kubelet[3656]: E1216 12:46:52.692448 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8447d995cc-x57ls" podUID="c9372ebb-481a-480c-8bf1-ba7918503e79" Dec 16 12:46:52.783695 kubelet[3656]: E1216 12:46:52.783472 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c8c58856c-hs58v" podUID="1c9e39e8-3a67-4975-af12-07644724165b" Dec 16 12:46:52.783695 kubelet[3656]: E1216 12:46:52.783550 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8447d995cc-vpb8q" podUID="f4be582d-98bf-4dca-8981-8263274550a3" Dec 16 12:46:52.798836 kubelet[3656]: I1216 12:46:52.798698 3656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-v9rsw" podStartSLOduration=53.798677601 podStartE2EDuration="53.798677601s" podCreationTimestamp="2025-12-16 12:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:46:52.798356751 +0000 UTC m=+59.307010413" watchObservedRunningTime="2025-12-16 12:46:52.798677601 +0000 UTC m=+59.307331263" Dec 16 12:46:52.823000 audit[5911]: NETFILTER_CFG table=filter:136 family=2 entries=14 
op=nft_register_rule pid=5911 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:52.823000 audit[5911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff9985b00 a2=0 a3=1 items=0 ppid=3794 pid=5911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:52.823000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:52.827000 audit[5911]: NETFILTER_CFG table=nat:137 family=2 entries=44 op=nft_register_rule pid=5911 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:52.827000 audit[5911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffff9985b00 a2=0 a3=1 items=0 ppid=3794 pid=5911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:52.827000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:52.853000 audit[5913]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=5913 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:52.853000 audit[5913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc2240230 a2=0 a3=1 items=0 ppid=3794 pid=5913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:52.853000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:52.860000 audit[5913]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=5913 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:52.860000 audit[5913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc2240230 a2=0 a3=1 items=0 ppid=3794 pid=5913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:52.860000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:52.997459 containerd[2141]: time="2025-12-16T12:46:52.997328619Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:53.000253 containerd[2141]: time="2025-12-16T12:46:53.000139582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:46:53.000253 containerd[2141]: time="2025-12-16T12:46:53.000205248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:53.000439 kubelet[3656]: E1216 12:46:53.000397 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:46:53.000499 kubelet[3656]: E1216 12:46:53.000452 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:46:53.001015 kubelet[3656]: E1216 12:46:53.000876 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvdr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-fcd869d9b-bwx78_calico-system(7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:53.001157 containerd[2141]: time="2025-12-16T12:46:53.000946798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:46:53.002386 kubelet[3656]: E1216 12:46:53.002349 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fcd869d9b-bwx78" podUID="7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21" Dec 16 12:46:53.296166 containerd[2141]: time="2025-12-16T12:46:53.296114265Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:53.299198 containerd[2141]: time="2025-12-16T12:46:53.299154980Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:46:53.299293 containerd[2141]: time="2025-12-16T12:46:53.299258431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:53.299502 kubelet[3656]: E1216 12:46:53.299464 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:46:53.299542 kubelet[3656]: E1216 12:46:53.299516 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:46:53.299661 kubelet[3656]: E1216 12:46:53.299627 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hx7pt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-knbcd_calico-system(50d00cc7-1203-4290-806c-1437385334b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:53.301072 kubelet[3656]: E1216 12:46:53.300995 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-knbcd" podUID="50d00cc7-1203-4290-806c-1437385334b5" Dec 16 12:46:53.784638 kubelet[3656]: E1216 12:46:53.784549 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-knbcd" podUID="50d00cc7-1203-4290-806c-1437385334b5" Dec 16 12:46:53.838000 audit[5917]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=5917 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:53.838000 audit[5917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffebf4dc40 a2=0 a3=1 items=0 ppid=3794 pid=5917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:53.838000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:46:53.861000 audit[5917]: NETFILTER_CFG table=nat:141 family=2 entries=56 op=nft_register_chain pid=5917 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:46:53.861000 audit[5917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffebf4dc40 a2=0 a3=1 items=0 ppid=3794 pid=5917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:46:53.861000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 
12:46:56.565657 containerd[2141]: time="2025-12-16T12:46:56.564911605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:46:56.815061 containerd[2141]: time="2025-12-16T12:46:56.814816892Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:56.820369 containerd[2141]: time="2025-12-16T12:46:56.820190268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:46:56.820369 containerd[2141]: time="2025-12-16T12:46:56.820233117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:56.820761 kubelet[3656]: E1216 12:46:56.820726 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:46:56.821301 kubelet[3656]: E1216 12:46:56.821117 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:46:56.821301 kubelet[3656]: E1216 12:46:56.821250 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mpmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q9ldj_calico-system(c8d2f0f9-d4bf-424e-80b4-888570287c6a): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:56.823633 containerd[2141]: time="2025-12-16T12:46:56.823566608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:46:57.128657 containerd[2141]: time="2025-12-16T12:46:57.128477261Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:46:57.133942 containerd[2141]: time="2025-12-16T12:46:57.133889678Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:46:57.134022 containerd[2141]: time="2025-12-16T12:46:57.133927407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:46:57.134230 kubelet[3656]: E1216 12:46:57.134183 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:46:57.134289 kubelet[3656]: E1216 12:46:57.134245 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:46:57.134378 kubelet[3656]: E1216 12:46:57.134347 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mpmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q9ldj_calico-system(c8d2f0f9-d4bf-424e-80b4-888570287c6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:46:57.135487 kubelet[3656]: E1216 12:46:57.135454 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q9ldj" podUID="c8d2f0f9-d4bf-424e-80b4-888570287c6a" Dec 16 12:47:03.565446 kubelet[3656]: E1216 12:47:03.565378 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8447d995cc-x57ls" podUID="c9372ebb-481a-480c-8bf1-ba7918503e79" Dec 16 12:47:05.568203 containerd[2141]: time="2025-12-16T12:47:05.568145296Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:47:05.791501 containerd[2141]: time="2025-12-16T12:47:05.791441769Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:05.795121 containerd[2141]: time="2025-12-16T12:47:05.794782203Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:47:05.795121 containerd[2141]: time="2025-12-16T12:47:05.794896870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:05.796149 kubelet[3656]: E1216 12:47:05.795063 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:47:05.796149 kubelet[3656]: E1216 12:47:05.795131 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:47:05.796149 kubelet[3656]: E1216 12:47:05.795240 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcrwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c8c58856c-hs58v_calico-system(1c9e39e8-3a67-4975-af12-07644724165b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:05.796725 kubelet[3656]: E1216 12:47:05.796665 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c8c58856c-hs58v" podUID="1c9e39e8-3a67-4975-af12-07644724165b" Dec 16 12:47:08.568422 containerd[2141]: time="2025-12-16T12:47:08.568379614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:47:08.568962 kubelet[3656]: E1216 12:47:08.568551 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fcd869d9b-bwx78" podUID="7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21" Dec 16 12:47:08.845481 containerd[2141]: time="2025-12-16T12:47:08.844874521Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:08.849188 containerd[2141]: time="2025-12-16T12:47:08.849048737Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:47:08.849188 containerd[2141]: time="2025-12-16T12:47:08.849151604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:08.849359 kubelet[3656]: E1216 12:47:08.849315 3656 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:08.849415 kubelet[3656]: E1216 12:47:08.849364 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:08.849599 kubelet[3656]: E1216 12:47:08.849564 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lszvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8447d995cc-vpb8q_calico-apiserver(f4be582d-98bf-4dca-8981-8263274550a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:08.850784 kubelet[3656]: E1216 12:47:08.850703 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-8447d995cc-vpb8q" podUID="f4be582d-98bf-4dca-8981-8263274550a3" Dec 16 12:47:08.850930 containerd[2141]: time="2025-12-16T12:47:08.850905859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:47:09.100272 containerd[2141]: time="2025-12-16T12:47:09.099734080Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:09.102944 containerd[2141]: time="2025-12-16T12:47:09.102898205Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:47:09.103119 containerd[2141]: time="2025-12-16T12:47:09.102915526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:09.103192 kubelet[3656]: E1216 12:47:09.103150 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:47:09.103243 kubelet[3656]: E1216 12:47:09.103202 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:47:09.103506 kubelet[3656]: E1216 12:47:09.103315 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hx7pt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-knbcd_calico-system(50d00cc7-1203-4290-806c-1437385334b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:09.104664 kubelet[3656]: E1216 12:47:09.104628 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-knbcd" podUID="50d00cc7-1203-4290-806c-1437385334b5" Dec 16 12:47:10.567593 kubelet[3656]: E1216 12:47:10.567451 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q9ldj" podUID="c8d2f0f9-d4bf-424e-80b4-888570287c6a" Dec 16 12:47:16.567883 containerd[2141]: time="2025-12-16T12:47:16.567598294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:47:16.863969 containerd[2141]: time="2025-12-16T12:47:16.863043419Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:16.867474 containerd[2141]: time="2025-12-16T12:47:16.867300658Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:47:16.867474 containerd[2141]: 
time="2025-12-16T12:47:16.867417141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:16.867684 kubelet[3656]: E1216 12:47:16.867632 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:16.867995 kubelet[3656]: E1216 12:47:16.867693 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:16.867995 kubelet[3656]: E1216 12:47:16.867806 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zg9nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8447d995cc-x57ls_calico-apiserver(c9372ebb-481a-480c-8bf1-ba7918503e79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:16.869270 kubelet[3656]: E1216 12:47:16.869224 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8447d995cc-x57ls" podUID="c9372ebb-481a-480c-8bf1-ba7918503e79" Dec 16 12:47:18.565613 kubelet[3656]: E1216 12:47:18.565554 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c8c58856c-hs58v" podUID="1c9e39e8-3a67-4975-af12-07644724165b" Dec 16 12:47:20.566556 kubelet[3656]: E1216 12:47:20.566234 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-knbcd" podUID="50d00cc7-1203-4290-806c-1437385334b5" Dec 16 12:47:20.566964 containerd[2141]: time="2025-12-16T12:47:20.566361051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:47:20.794209 containerd[2141]: time="2025-12-16T12:47:20.794157322Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:20.797206 containerd[2141]: time="2025-12-16T12:47:20.797166356Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:47:20.797355 containerd[2141]: time="2025-12-16T12:47:20.797172572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:20.797465 kubelet[3656]: E1216 12:47:20.797408 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:47:20.797522 kubelet[3656]: E1216 12:47:20.797474 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:47:20.797593 kubelet[3656]: E1216 12:47:20.797564 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9a24d6b5f1b4428ea6052e9fe2a904c0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kvdr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-fcd869d9b-bwx78_calico-system(7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:20.800222 containerd[2141]: time="2025-12-16T12:47:20.800160429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:47:21.059108 containerd[2141]: time="2025-12-16T12:47:21.058776905Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:21.062353 containerd[2141]: time="2025-12-16T12:47:21.062301138Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:47:21.062779 containerd[2141]: time="2025-12-16T12:47:21.062400701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:21.063663 kubelet[3656]: E1216 12:47:21.062659 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:47:21.063663 kubelet[3656]: E1216 12:47:21.062708 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:47:21.063663 kubelet[3656]: E1216 12:47:21.062831 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvdr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-fcd869d9b-bwx78_calico-system(7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:21.063977 kubelet[3656]: E1216 12:47:21.063939 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fcd869d9b-bwx78" podUID="7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21" Dec 16 12:47:23.566618 kubelet[3656]: E1216 12:47:23.566263 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8447d995cc-vpb8q" podUID="f4be582d-98bf-4dca-8981-8263274550a3" Dec 16 12:47:23.567541 containerd[2141]: 
time="2025-12-16T12:47:23.567356997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:47:23.848757 containerd[2141]: time="2025-12-16T12:47:23.848623637Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:23.851724 containerd[2141]: time="2025-12-16T12:47:23.851678352Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:47:23.851808 containerd[2141]: time="2025-12-16T12:47:23.851765962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:23.851945 kubelet[3656]: E1216 12:47:23.851905 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:47:23.852005 kubelet[3656]: E1216 12:47:23.851955 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:47:23.852083 kubelet[3656]: E1216 12:47:23.852050 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mpmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q9ldj_calico-system(c8d2f0f9-d4bf-424e-80b4-888570287c6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:23.854590 containerd[2141]: time="2025-12-16T12:47:23.854548029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:47:24.118004 containerd[2141]: time="2025-12-16T12:47:24.117641315Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:24.120299 containerd[2141]: time="2025-12-16T12:47:24.120246208Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:47:24.120673 containerd[2141]: time="2025-12-16T12:47:24.120251184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:24.120744 kubelet[3656]: E1216 12:47:24.120560 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:47:24.120744 kubelet[3656]: E1216 12:47:24.120612 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:47:24.120886 kubelet[3656]: E1216 12:47:24.120735 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mpmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q9ldj_calico-system(c8d2f0f9-d4bf-424e-80b4-888570287c6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:24.121964 kubelet[3656]: E1216 12:47:24.121905 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q9ldj" podUID="c8d2f0f9-d4bf-424e-80b4-888570287c6a" Dec 16 12:47:29.569107 kubelet[3656]: E1216 12:47:29.568116 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8447d995cc-x57ls" podUID="c9372ebb-481a-480c-8bf1-ba7918503e79" Dec 16 12:47:31.566317 containerd[2141]: time="2025-12-16T12:47:31.566274255Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:47:31.812900 containerd[2141]: time="2025-12-16T12:47:31.812846076Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:31.817149 containerd[2141]: time="2025-12-16T12:47:31.816780632Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:47:31.817149 containerd[2141]: time="2025-12-16T12:47:31.816885419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:31.817946 kubelet[3656]: E1216 12:47:31.817856 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:47:31.819360 kubelet[3656]: E1216 12:47:31.818126 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:47:31.819360 kubelet[3656]: E1216 12:47:31.819481 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcrwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c8c58856c-hs58v_calico-system(1c9e39e8-3a67-4975-af12-07644724165b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:31.820995 kubelet[3656]: E1216 12:47:31.820879 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c8c58856c-hs58v" podUID="1c9e39e8-3a67-4975-af12-07644724165b" Dec 16 12:47:32.565492 containerd[2141]: time="2025-12-16T12:47:32.565448286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:47:32.800111 containerd[2141]: time="2025-12-16T12:47:32.799983032Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:32.803429 containerd[2141]: time="2025-12-16T12:47:32.803293545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:47:32.803429 containerd[2141]: time="2025-12-16T12:47:32.803388900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:32.803756 kubelet[3656]: E1216 12:47:32.803707 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:47:32.803906 kubelet[3656]: E1216 12:47:32.803838 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:47:32.804585 kubelet[3656]: E1216 12:47:32.804514 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hx7pt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-knbcd_calico-system(50d00cc7-1203-4290-806c-1437385334b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:32.805719 kubelet[3656]: E1216 12:47:32.805689 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-knbcd" podUID="50d00cc7-1203-4290-806c-1437385334b5" Dec 16 12:47:34.566101 kubelet[3656]: E1216 12:47:34.566043 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fcd869d9b-bwx78" podUID="7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21" Dec 16 12:47:35.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.49:22-10.200.16.10:37576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:35.559989 systemd[1]: Started sshd@7-10.200.20.49:22-10.200.16.10:37576.service - OpenSSH per-connection server daemon (10.200.16.10:37576). Dec 16 12:47:35.562984 kernel: kauditd_printk_skb: 52 callbacks suppressed Dec 16 12:47:35.563113 kernel: audit: type=1130 audit(1765889255.558:767): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.49:22-10.200.16.10:37576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:36.018547 sshd[6015]: Accepted publickey for core from 10.200.16.10 port 37576 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:47:36.017000 audit[6015]: USER_ACCT pid=6015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.037861 sshd-session[6015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:47:36.035000 audit[6015]: CRED_ACQ pid=6015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.060770 kernel: audit: type=1101 audit(1765889256.017:768): pid=6015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.060884 kernel: audit: type=1103 audit(1765889256.035:769): pid=6015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.071791 kernel: audit: type=1006 audit(1765889256.035:770): pid=6015 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 16 12:47:36.035000 audit[6015]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcab976c0 a2=3 a3=0 items=0 ppid=1 pid=6015 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:36.090067 kernel: audit: type=1300 audit(1765889256.035:770): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcab976c0 a2=3 a3=0 items=0 ppid=1 pid=6015 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:36.035000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:36.097650 kernel: audit: type=1327 audit(1765889256.035:770): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:36.098422 systemd-logind[2108]: New session 10 of user core. Dec 16 12:47:36.101335 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:47:36.103000 audit[6015]: USER_START pid=6015 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.123000 audit[6018]: CRED_ACQ pid=6018 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.141517 kernel: audit: type=1105 audit(1765889256.103:771): pid=6015 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.141644 kernel: audit: type=1103 audit(1765889256.123:772): pid=6018 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.349177 sshd[6018]: Connection closed by 10.200.16.10 port 37576 Dec 16 12:47:36.350231 sshd-session[6015]: pam_unix(sshd:session): session closed for user core Dec 16 12:47:36.350000 audit[6015]: USER_END pid=6015 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.354659 systemd[1]: sshd@7-10.200.20.49:22-10.200.16.10:37576.service: Deactivated successfully. Dec 16 12:47:36.358404 systemd[1]: session-10.scope: Deactivated successfully. 
Dec 16 12:47:36.350000 audit[6015]: CRED_DISP pid=6015 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.389589 kernel: audit: type=1106 audit(1765889256.350:773): pid=6015 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.389717 kernel: audit: type=1104 audit(1765889256.350:774): pid=6015 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:36.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.49:22-10.200.16.10:37576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:36.391169 systemd-logind[2108]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:47:36.392802 systemd-logind[2108]: Removed session 10. Dec 16 12:47:36.569003 kubelet[3656]: E1216 12:47:36.568540 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q9ldj" podUID="c8d2f0f9-d4bf-424e-80b4-888570287c6a" Dec 16 12:47:38.566976 containerd[2141]: time="2025-12-16T12:47:38.566940516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:47:38.791228 containerd[2141]: time="2025-12-16T12:47:38.791174628Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:47:38.794136 containerd[2141]: time="2025-12-16T12:47:38.794078777Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:47:38.794227 containerd[2141]: time="2025-12-16T12:47:38.794193357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:47:38.794409 kubelet[3656]: E1216 12:47:38.794364 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:38.794982 kubelet[3656]: E1216 12:47:38.794764 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:47:38.794982 kubelet[3656]: E1216 12:47:38.794931 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lszvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8447d995cc-vpb8q_calico-apiserver(f4be582d-98bf-4dca-8981-8263274550a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:47:38.796387 kubelet[3656]: E1216 12:47:38.796346 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8447d995cc-vpb8q" podUID="f4be582d-98bf-4dca-8981-8263274550a3" Dec 16 12:47:41.438193 systemd[1]: Started sshd@8-10.200.20.49:22-10.200.16.10:32814.service - OpenSSH per-connection server daemon (10.200.16.10:32814). 
Dec 16 12:47:41.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.49:22-10.200.16.10:32814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:41.441604 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:47:41.441688 kernel: audit: type=1130 audit(1765889261.437:776): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.49:22-10.200.16.10:32814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:41.871000 audit[6032]: USER_ACCT pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:41.873981 sshd[6032]: Accepted publickey for core from 10.200.16.10 port 32814 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:47:41.886000 audit[6032]: CRED_ACQ pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:41.888475 sshd-session[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:47:41.902154 kernel: audit: type=1101 audit(1765889261.871:777): pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:41.902344 kernel: audit: type=1103 audit(1765889261.886:778): pid=6032 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:41.912305 kernel: audit: type=1006 audit(1765889261.886:779): pid=6032 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 12:47:41.886000 audit[6032]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdff9f240 a2=3 a3=0 items=0 ppid=1 pid=6032 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:41.919553 systemd-logind[2108]: New session 11 of user core. Dec 16 12:47:41.930566 kernel: audit: type=1300 audit(1765889261.886:779): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdff9f240 a2=3 a3=0 items=0 ppid=1 pid=6032 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:41.886000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:41.937748 kernel: audit: type=1327 audit(1765889261.886:779): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:41.938549 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 12:47:41.941000 audit[6032]: USER_START pid=6032 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:41.960000 audit[6035]: CRED_ACQ pid=6035 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:41.977261 kernel: audit: type=1105 audit(1765889261.941:780): pid=6032 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:41.977402 kernel: audit: type=1103 audit(1765889261.960:781): pid=6035 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:42.167936 sshd[6035]: Connection closed by 10.200.16.10 port 32814 Dec 16 12:47:42.168443 sshd-session[6032]: pam_unix(sshd:session): session closed for user core Dec 16 12:47:42.168000 audit[6032]: USER_END pid=6032 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:42.172549 systemd[1]: sshd@8-10.200.20.49:22-10.200.16.10:32814.service: Deactivated successfully. Dec 16 12:47:42.176944 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:47:42.168000 audit[6032]: CRED_DISP pid=6032 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:42.189156 systemd-logind[2108]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:47:42.190356 systemd-logind[2108]: Removed session 11. Dec 16 12:47:42.202106 kernel: audit: type=1106 audit(1765889262.168:782): pid=6032 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:42.202353 kernel: audit: type=1104 audit(1765889262.168:783): pid=6032 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:42.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.49:22-10.200.16.10:32814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:47:42.565709 kubelet[3656]: E1216 12:47:42.565659 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8447d995cc-x57ls" podUID="c9372ebb-481a-480c-8bf1-ba7918503e79" Dec 16 12:47:44.566292 kubelet[3656]: E1216 12:47:44.566194 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c8c58856c-hs58v" podUID="1c9e39e8-3a67-4975-af12-07644724165b" Dec 16 12:47:45.574571 kubelet[3656]: E1216 12:47:45.574501 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fcd869d9b-bwx78" podUID="7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21" Dec 16 12:47:46.565815 kubelet[3656]: E1216 12:47:46.565770 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-knbcd" podUID="50d00cc7-1203-4290-806c-1437385334b5" Dec 16 12:47:47.275811 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:47:47.275963 kernel: audit: type=1130 audit(1765889267.257:785): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.49:22-10.200.16.10:32830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:47.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.49:22-10.200.16.10:32830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:47.258304 systemd[1]: Started sshd@9-10.200.20.49:22-10.200.16.10:32830.service - OpenSSH per-connection server daemon (10.200.16.10:32830). 
Dec 16 12:47:47.568742 kubelet[3656]: E1216 12:47:47.568335 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q9ldj" podUID="c8d2f0f9-d4bf-424e-80b4-888570287c6a" Dec 16 12:47:47.715000 audit[6050]: USER_ACCT pid=6050 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:47.764108 sshd[6050]: Accepted publickey for core from 10.200.16.10 port 32830 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:47:47.766179 kernel: audit: type=1101 audit(1765889267.715:786): pid=6050 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:47.765000 audit[6050]: CRED_ACQ pid=6050 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:47.774254 systemd-logind[2108]: New session 12 of user core. Dec 16 12:47:47.766687 sshd-session[6050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:47:47.789877 kernel: audit: type=1103 audit(1765889267.765:787): pid=6050 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:47.790031 kernel: audit: type=1006 audit(1765889267.765:788): pid=6050 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 16 12:47:47.765000 audit[6050]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd53a850 a2=3 a3=0 items=0 ppid=1 pid=6050 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:47.792411 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 12:47:47.808022 kernel: audit: type=1300 audit(1765889267.765:788): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd53a850 a2=3 a3=0 items=0 ppid=1 pid=6050 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:47.765000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:47.815442 kernel: audit: type=1327 audit(1765889267.765:788): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:47.814000 audit[6050]: USER_START pid=6050 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:47.833088 kernel: audit: type=1105 audit(1765889267.814:789): pid=6050 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:47.817000 audit[6053]: CRED_ACQ pid=6053 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:47.849902 kernel: audit: type=1103 audit(1765889267.817:790): pid=6053 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:48.064256 sshd[6053]: Connection closed by 10.200.16.10 port 32830 Dec 16 12:47:48.065362 sshd-session[6050]: pam_unix(sshd:session): session closed for user core Dec 16 12:47:48.066000 audit[6050]: USER_END pid=6050 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:48.070586 systemd-logind[2108]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:47:48.073252 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:47:48.076333 systemd[1]: sshd@9-10.200.20.49:22-10.200.16.10:32830.service: Deactivated successfully. Dec 16 12:47:48.080642 systemd-logind[2108]: Removed session 12. 
Dec 16 12:47:48.066000 audit[6050]: CRED_DISP pid=6050 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:48.100718 kernel: audit: type=1106 audit(1765889268.066:791): pid=6050 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:48.100852 kernel: audit: type=1104 audit(1765889268.066:792): pid=6050 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:48.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.49:22-10.200.16.10:32830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:48.146964 systemd[1]: Started sshd@10-10.200.20.49:22-10.200.16.10:32844.service - OpenSSH per-connection server daemon (10.200.16.10:32844). Dec 16 12:47:48.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.49:22-10.200.16.10:32844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:48.537000 audit[6066]: USER_ACCT pid=6066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:48.538645 sshd[6066]: Accepted publickey for core from 10.200.16.10 port 32844 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:47:48.539000 audit[6066]: CRED_ACQ pid=6066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:48.539000 audit[6066]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe1c09e60 a2=3 a3=0 items=0 ppid=1 pid=6066 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:48.539000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:48.541361 sshd-session[6066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:47:48.550008 systemd-logind[2108]: New session 13 of user core. Dec 16 12:47:48.556847 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 16 12:47:48.559000 audit[6066]: USER_START pid=6066 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:48.562000 audit[6069]: CRED_ACQ pid=6069 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:48.830526 sshd[6069]: Connection closed by 10.200.16.10 port 32844 Dec 16 12:47:48.831348 sshd-session[6066]: pam_unix(sshd:session): session closed for user core Dec 16 12:47:48.832000 audit[6066]: USER_END pid=6066 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:48.832000 audit[6066]: CRED_DISP pid=6066 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:48.837054 systemd[1]: sshd@10-10.200.20.49:22-10.200.16.10:32844.service: Deactivated successfully. Dec 16 12:47:48.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.49:22-10.200.16.10:32844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:48.842229 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:47:48.845392 systemd-logind[2108]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:47:48.849355 systemd-logind[2108]: Removed session 13. Dec 16 12:47:48.912145 systemd[1]: Started sshd@11-10.200.20.49:22-10.200.16.10:32854.service - OpenSSH per-connection server daemon (10.200.16.10:32854). Dec 16 12:47:48.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.49:22-10.200.16.10:32854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:47:49.319000 audit[6079]: USER_ACCT pid=6079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:49.322229 sshd[6079]: Accepted publickey for core from 10.200.16.10 port 32854 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:47:49.321000 audit[6079]: CRED_ACQ pid=6079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:49.322000 audit[6079]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffec4fbef0 a2=3 a3=0 items=0 ppid=1 pid=6079 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:49.322000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:49.323633 sshd-session[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:47:49.330754 systemd-logind[2108]: New session 14 of user core. Dec 16 12:47:49.337310 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 12:47:49.339000 audit[6079]: USER_START pid=6079 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:49.341000 audit[6082]: CRED_ACQ pid=6082 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:49.568737 kubelet[3656]: E1216 12:47:49.568363 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8447d995cc-vpb8q" podUID="f4be582d-98bf-4dca-8981-8263274550a3" Dec 16 12:47:49.585279 sshd[6082]: Connection closed by 10.200.16.10 port 32854 Dec 16 12:47:49.584930 sshd-session[6079]: pam_unix(sshd:session): session closed for user core Dec 16 12:47:49.587000 audit[6079]: USER_END pid=6079 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:49.588000 audit[6079]: CRED_DISP pid=6079 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:49.592784 
systemd[1]: sshd@11-10.200.20.49:22-10.200.16.10:32854.service: Deactivated successfully. Dec 16 12:47:49.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.49:22-10.200.16.10:32854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:49.595505 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:47:49.596466 systemd-logind[2108]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:47:49.598096 systemd-logind[2108]: Removed session 14. Dec 16 12:47:54.564542 kubelet[3656]: E1216 12:47:54.564481 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8447d995cc-x57ls" podUID="c9372ebb-481a-480c-8bf1-ba7918503e79" Dec 16 12:47:54.678278 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 12:47:54.678404 kernel: audit: type=1130 audit(1765889274.663:812): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.49:22-10.200.16.10:35416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:54.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.49:22-10.200.16.10:35416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:47:54.664885 systemd[1]: Started sshd@12-10.200.20.49:22-10.200.16.10:35416.service - OpenSSH per-connection server daemon (10.200.16.10:35416). Dec 16 12:47:55.073000 audit[6100]: USER_ACCT pid=6100 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.078221 sshd[6100]: Accepted publickey for core from 10.200.16.10 port 35416 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:47:55.089000 audit[6100]: CRED_ACQ pid=6100 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.090565 sshd-session[6100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:47:55.097000 systemd-logind[2108]: New session 15 of user core. 
Dec 16 12:47:55.105369 kernel: audit: type=1101 audit(1765889275.073:813): pid=6100 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.105491 kernel: audit: type=1103 audit(1765889275.089:814): pid=6100 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.115590 kernel: audit: type=1006 audit(1765889275.089:815): pid=6100 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 16 12:47:55.089000 audit[6100]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd7b96e10 a2=3 a3=0 items=0 ppid=1 pid=6100 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:55.132119 kernel: audit: type=1300 audit(1765889275.089:815): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd7b96e10 a2=3 a3=0 items=0 ppid=1 pid=6100 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:47:55.089000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:55.133461 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:47:55.139373 kernel: audit: type=1327 audit(1765889275.089:815): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:47:55.139000 audit[6100]: USER_START pid=6100 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.157000 audit[6103]: CRED_ACQ pid=6103 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.172072 kernel: audit: type=1105 audit(1765889275.139:816): pid=6100 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.172280 kernel: audit: type=1103 audit(1765889275.157:817): pid=6103 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.358448 sshd[6103]: Connection closed by 10.200.16.10 port 35416 Dec 16 12:47:55.360354 sshd-session[6100]: pam_unix(sshd:session): session closed for user core Dec 16 12:47:55.362000 audit[6100]: USER_END pid=6100 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.368124 systemd[1]: sshd@12-10.200.20.49:22-10.200.16.10:35416.service: Deactivated successfully. Dec 16 12:47:55.372212 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:47:55.374943 systemd-logind[2108]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:47:55.380472 systemd-logind[2108]: Removed session 15. Dec 16 12:47:55.362000 audit[6100]: CRED_DISP pid=6100 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.402524 kernel: audit: type=1106 audit(1765889275.362:818): pid=6100 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.402657 kernel: audit: type=1104 audit(1765889275.362:819): pid=6100 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:47:55.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.49:22-10.200.16.10:35416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:47:58.566324 kubelet[3656]: E1216 12:47:58.566275 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-knbcd" podUID="50d00cc7-1203-4290-806c-1437385334b5" Dec 16 12:47:59.569184 kubelet[3656]: E1216 12:47:59.567334 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c8c58856c-hs58v" podUID="1c9e39e8-3a67-4975-af12-07644724165b" Dec 16 12:47:59.569775 kubelet[3656]: E1216 12:47:59.569743 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fcd869d9b-bwx78" podUID="7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21" Dec 16 12:47:59.570638 kubelet[3656]: E1216 12:47:59.570591 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q9ldj" podUID="c8d2f0f9-d4bf-424e-80b4-888570287c6a" Dec 16 12:48:00.448354 systemd[1]: Started sshd@13-10.200.20.49:22-10.200.16.10:54012.service - OpenSSH per-connection server daemon (10.200.16.10:54012). Dec 16 12:48:00.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.49:22-10.200.16.10:54012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:48:00.451474 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:48:00.451549 kernel: audit: type=1130 audit(1765889280.447:821): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.49:22-10.200.16.10:54012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:00.857000 audit[6117]: USER_ACCT pid=6117 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:00.876413 sshd[6117]: Accepted publickey for core from 10.200.16.10 port 54012 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:00.876000 audit[6117]: CRED_ACQ pid=6117 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:00.892739 kernel: audit: type=1101 audit(1765889280.857:822): pid=6117 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:00.892842 kernel: audit: type=1103 audit(1765889280.876:823): pid=6117 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:00.902349 kernel: audit: type=1006 audit(1765889280.876:824): pid=6117 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 12:48:00.876000 audit[6117]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeddf2e10 a2=3 a3=0 items=0 ppid=1 pid=6117 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:00.919685 kernel: audit: type=1300 audit(1765889280.876:824): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeddf2e10 a2=3 a3=0 items=0 ppid=1 pid=6117 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:00.876000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:00.926845 kernel: audit: type=1327 audit(1765889280.876:824): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:00.971877 sshd-session[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:00.977935 systemd-logind[2108]: New session 16 of user core. Dec 16 12:48:00.983252 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 12:48:00.985000 audit[6117]: USER_START pid=6117 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:00.986000 audit[6120]: CRED_ACQ pid=6120 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:01.017940 kernel: audit: type=1105 audit(1765889280.985:825): pid=6117 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:01.018071 kernel: audit: type=1103 audit(1765889280.986:826): pid=6120 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:01.177738 sshd[6120]: Connection closed by 10.200.16.10 port 54012 Dec 16 12:48:01.178722 sshd-session[6117]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:01.180000 audit[6117]: USER_END pid=6117 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:01.184875 systemd[1]: sshd@13-10.200.20.49:22-10.200.16.10:54012.service: Deactivated successfully. Dec 16 12:48:01.187064 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:48:01.180000 audit[6117]: CRED_DISP pid=6117 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:01.218430 kernel: audit: type=1106 audit(1765889281.180:827): pid=6117 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:01.218577 kernel: audit: type=1104 audit(1765889281.180:828): pid=6117 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:01.218774 systemd-logind[2108]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:48:01.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.49:22-10.200.16.10:54012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:01.222470 systemd-logind[2108]: Removed session 16. 
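Each SSH connection in this journal leaves the same audit trail: USER_ACCT and CRED_ACQ when the key is accepted, a LOGIN/SYSCALL/PROCTITLE group when the audit session id is assigned, USER_START when the PAM session opens, then USER_END and CRED_DISP on close, all bracketed by the SERVICE_START/SERVICE_STOP records for the per-connection sshd unit (the kernel "audit: type=NNNN" echoes pair off against these names, e.g. 1105 with USER_START and 1106 with USER_END). As an illustrative aside, a minimal parser that groups these records by audit session id could look like the sketch below; the line shape is assumed to match the excerpts above, and the function name is made up for the example.

```python
import re
from collections import defaultdict

# Illustrative sketch; assumes journal lines shaped like
#   Dec 16 12:47:41.941000 audit[6032]: USER_START pid=6032 uid=0 auid=500 ses=11 ...
AUDIT_RE = re.compile(r"audit\[\d+\]:\s+(?P<type>[A-Z_]+)\b.*?\sses=(?P<ses>\d+)")

def sessions(lines):
    """Group audit record types by audit session id (ses=...)."""
    by_ses = defaultdict(list)
    for line in lines:
        m = AUDIT_RE.search(line)
        if m:
            by_ses[int(m.group("ses"))].append(m.group("type"))
    return dict(by_ses)

if __name__ == "__main__":
    sample = [
        'Dec 16 12:47:41.941000 audit[6032]: USER_START pid=6032 uid=0 auid=500 ses=11 res=success',
        'Dec 16 12:47:42.168000 audit[6032]: USER_END pid=6032 uid=0 auid=500 ses=11 res=success',
        'Dec 16 12:47:42.168000 audit[6032]: CRED_DISP pid=6032 uid=0 auid=500 ses=11 res=success',
    ]
    print(sessions(sample))   # {11: ['USER_START', 'USER_END', 'CRED_DISP']}
```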
Dec 16 12:48:03.566717 kubelet[3656]: E1216 12:48:03.566169 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8447d995cc-vpb8q" podUID="f4be582d-98bf-4dca-8981-8263274550a3" Dec 16 12:48:06.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.49:22-10.200.16.10:54014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:06.270635 systemd[1]: Started sshd@14-10.200.20.49:22-10.200.16.10:54014.service - OpenSSH per-connection server daemon (10.200.16.10:54014). Dec 16 12:48:06.274220 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:48:06.274320 kernel: audit: type=1130 audit(1765889286.269:830): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.49:22-10.200.16.10:54014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:06.719099 sshd[6156]: Accepted publickey for core from 10.200.16.10 port 54014 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:06.717000 audit[6156]: USER_ACCT pid=6156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:06.736517 sshd-session[6156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:06.735000 audit[6156]: CRED_ACQ pid=6156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:06.752918 kernel: audit: type=1101 audit(1765889286.717:831): pid=6156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:06.753056 kernel: audit: type=1103 audit(1765889286.735:832): pid=6156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:06.762868 kernel: audit: type=1006 audit(1765889286.735:833): pid=6156 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 12:48:06.735000 audit[6156]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcb28b0b0 a2=3 a3=0 items=0 ppid=1 pid=6156 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:06.769315 systemd-logind[2108]: New session 17 of user core. 
Dec 16 12:48:06.782024 kernel: audit: type=1300 audit(1765889286.735:833): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcb28b0b0 a2=3 a3=0 items=0 ppid=1 pid=6156 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:06.735000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:06.789029 kernel: audit: type=1327 audit(1765889286.735:833): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:06.790369 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 12:48:06.793000 audit[6156]: USER_START pid=6156 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:06.815204 kernel: audit: type=1105 audit(1765889286.793:834): pid=6156 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:06.815000 audit[6159]: CRED_ACQ pid=6159 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:06.832128 kernel: audit: type=1103 audit(1765889286.815:835): pid=6159 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:07.045643 sshd[6159]: Connection closed by 10.200.16.10 port 54014 Dec 16 12:48:07.046983 sshd-session[6156]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:07.048000 audit[6156]: USER_END pid=6156 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:07.052227 systemd-logind[2108]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:48:07.054137 systemd[1]: sshd@14-10.200.20.49:22-10.200.16.10:54014.service: Deactivated successfully. Dec 16 12:48:07.059107 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:48:07.062954 systemd-logind[2108]: Removed session 17. 
Dec 16 12:48:07.048000 audit[6156]: CRED_DISP pid=6156 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:07.085972 kernel: audit: type=1106 audit(1765889287.048:836): pid=6156 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:07.086148 kernel: audit: type=1104 audit(1765889287.048:837): pid=6156 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:07.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.49:22-10.200.16.10:54014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:07.566431 containerd[2141]: time="2025-12-16T12:48:07.566168878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:48:07.823898 containerd[2141]: time="2025-12-16T12:48:07.823705823Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:48:07.826876 containerd[2141]: time="2025-12-16T12:48:07.826817997Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:48:07.827042 containerd[2141]: time="2025-12-16T12:48:07.826851157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:48:07.827136 kubelet[3656]: E1216 12:48:07.827076 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:48:07.827427 kubelet[3656]: E1216 12:48:07.827148 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:48:07.827427 kubelet[3656]: E1216 12:48:07.827262 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zg9nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8447d995cc-x57ls_calico-apiserver(c9372ebb-481a-480c-8bf1-ba7918503e79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:48:07.828742 kubelet[3656]: E1216 12:48:07.828706 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8447d995cc-x57ls" podUID="c9372ebb-481a-480c-8bf1-ba7918503e79" Dec 16 12:48:10.565960 containerd[2141]: time="2025-12-16T12:48:10.565140512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:48:10.804108 containerd[2141]: time="2025-12-16T12:48:10.804037688Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:48:10.806865 containerd[2141]: time="2025-12-16T12:48:10.806808448Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:48:10.807279 containerd[2141]: time="2025-12-16T12:48:10.806905643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:48:10.807521 kubelet[3656]: E1216 
12:48:10.807137 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:48:10.807521 kubelet[3656]: E1216 12:48:10.807201 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:48:10.807521 kubelet[3656]: E1216 12:48:10.807313 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9a24d6b5f1b4428ea6052e9fe2a904c0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kvdr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-fcd869d9b-bwx78_calico-system(7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:48:10.810118 containerd[2141]: time="2025-12-16T12:48:10.810018166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:48:11.082950 containerd[2141]: time="2025-12-16T12:48:11.082726352Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:48:11.086413 containerd[2141]: time="2025-12-16T12:48:11.086295504Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:48:11.086600 containerd[2141]: time="2025-12-16T12:48:11.086278216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:48:11.086687 kubelet[3656]: E1216 12:48:11.086561 3656 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:48:11.086687 kubelet[3656]: E1216 12:48:11.086611 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:48:11.086771 kubelet[3656]: E1216 12:48:11.086725 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvdr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-fcd869d9b-bwx78_calico-system(7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:48:11.088220 kubelet[3656]: E1216 12:48:11.088174 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found\"]" pod="calico-system/whisker-fcd869d9b-bwx78" podUID="7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21" Dec 16 12:48:11.565954 kubelet[3656]: E1216 12:48:11.565869 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-knbcd" podUID="50d00cc7-1203-4290-806c-1437385334b5" Dec 16 12:48:11.567775 containerd[2141]: time="2025-12-16T12:48:11.567737610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:48:11.803604 containerd[2141]: time="2025-12-16T12:48:11.803548424Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:48:11.806638 containerd[2141]: time="2025-12-16T12:48:11.806595105Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:48:11.806777 containerd[2141]: time="2025-12-16T12:48:11.806611146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:48:11.806880 kubelet[3656]: E1216 12:48:11.806835 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:48:11.806940 kubelet[3656]: E1216 12:48:11.806891 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:48:11.807033 kubelet[3656]: E1216 12:48:11.806998 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mpmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q9ldj_calico-system(c8d2f0f9-d4bf-424e-80b4-888570287c6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:48:11.809553 containerd[2141]: time="2025-12-16T12:48:11.809482533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:48:12.058639 containerd[2141]: time="2025-12-16T12:48:12.058577799Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:48:12.062708 containerd[2141]: time="2025-12-16T12:48:12.062653557Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:48:12.062821 containerd[2141]: time="2025-12-16T12:48:12.062654149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:48:12.062979 kubelet[3656]: E1216 12:48:12.062921 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:48:12.063431 kubelet[3656]: E1216 12:48:12.062980 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:48:12.063431 kubelet[3656]: E1216 12:48:12.063070 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mpmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-q9ldj_calico-system(c8d2f0f9-d4bf-424e-80b4-888570287c6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:48:12.064469 kubelet[3656]: E1216 12:48:12.064416 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q9ldj" podUID="c8d2f0f9-d4bf-424e-80b4-888570287c6a" Dec 16 12:48:12.135970 systemd[1]: Started sshd@15-10.200.20.49:22-10.200.16.10:55854.service - OpenSSH per-connection server daemon (10.200.16.10:55854). 
Dec 16 12:48:12.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.49:22-10.200.16.10:55854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:12.139963 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:48:12.140036 kernel: audit: type=1130 audit(1765889292.135:839): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.49:22-10.200.16.10:55854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:12.574000 audit[6180]: USER_ACCT pid=6180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:12.592117 sshd-session[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:12.593849 sshd[6180]: Accepted publickey for core from 10.200.16.10 port 55854 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:12.590000 audit[6180]: CRED_ACQ pid=6180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:12.594114 kernel: audit: type=1101 audit(1765889292.574:840): pid=6180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:12.622119 kernel: audit: type=1103 audit(1765889292.590:841): pid=6180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:12.622226 kernel: audit: type=1006 audit(1765889292.590:842): pid=6180 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 16 12:48:12.616446 systemd-logind[2108]: New session 18 of user core. Dec 16 12:48:12.590000 audit[6180]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdb95a8c0 a2=3 a3=0 items=0 ppid=1 pid=6180 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:12.623377 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 12:48:12.590000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:12.646666 kernel: audit: type=1300 audit(1765889292.590:842): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdb95a8c0 a2=3 a3=0 items=0 ppid=1 pid=6180 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:12.646790 kernel: audit: type=1327 audit(1765889292.590:842): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:12.640000 audit[6180]: USER_START pid=6180 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:12.643000 audit[6183]: CRED_ACQ pid=6183 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:12.686856 kernel: audit: type=1105 audit(1765889292.640:843): pid=6180 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:12.686944 kernel: audit: type=1103 audit(1765889292.643:844): pid=6183 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:12.887991 sshd[6183]: Connection closed by 10.200.16.10 port 55854 Dec 16 12:48:12.890308 sshd-session[6180]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:12.890000 audit[6180]: USER_END pid=6180 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:12.911102 kernel: audit: type=1106 audit(1765889292.890:845): pid=6180 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:12.906000 audit[6180]: CRED_DISP pid=6180 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:12.911647 systemd[1]: sshd@15-10.200.20.49:22-10.200.16.10:55854.service: Deactivated successfully. Dec 16 12:48:12.917019 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:48:12.923630 systemd-logind[2108]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:48:12.925375 systemd-logind[2108]: Removed session 18. 
Dec 16 12:48:12.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.49:22-10.200.16.10:55854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:12.929126 kernel: audit: type=1104 audit(1765889292.906:846): pid=6180 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:12.990327 systemd[1]: Started sshd@16-10.200.20.49:22-10.200.16.10:55864.service - OpenSSH per-connection server daemon (10.200.16.10:55864). Dec 16 12:48:12.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.49:22-10.200.16.10:55864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:13.419000 audit[6195]: USER_ACCT pid=6195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:13.422030 sshd[6195]: Accepted publickey for core from 10.200.16.10 port 55864 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:13.422000 audit[6195]: CRED_ACQ pid=6195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:13.422000 audit[6195]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc32322a0 a2=3 a3=0 items=0 ppid=1 pid=6195 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:13.422000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:13.424830 sshd-session[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:13.432249 systemd-logind[2108]: New session 19 of user core. Dec 16 12:48:13.438310 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 12:48:13.441000 audit[6195]: USER_START pid=6195 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:13.444000 audit[6198]: CRED_ACQ pid=6198 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:13.567952 containerd[2141]: time="2025-12-16T12:48:13.567321171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:48:13.818531 containerd[2141]: time="2025-12-16T12:48:13.818469961Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:48:13.822005 sshd[6198]: Connection closed by 10.200.16.10 port 55864 Dec 16 12:48:13.822698 containerd[2141]: time="2025-12-16T12:48:13.822118931Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:48:13.822698 containerd[2141]: time="2025-12-16T12:48:13.822604817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:48:13.822512 sshd-session[6195]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:13.823099 kubelet[3656]: E1216 12:48:13.823015 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:48:13.823099 kubelet[3656]: E1216 12:48:13.823070 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:48:13.823613 kubelet[3656]: E1216 12:48:13.823513 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcrwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c8c58856c-hs58v_calico-system(1c9e39e8-3a67-4975-af12-07644724165b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:48:13.823000 audit[6195]: USER_END pid=6195 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:13.824000 audit[6195]: CRED_DISP pid=6195 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:13.825943 kubelet[3656]: E1216 12:48:13.825151 3656 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c8c58856c-hs58v" podUID="1c9e39e8-3a67-4975-af12-07644724165b" Dec 16 12:48:13.828213 systemd[1]: sshd@16-10.200.20.49:22-10.200.16.10:55864.service: Deactivated successfully. Dec 16 12:48:13.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.49:22-10.200.16.10:55864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:13.830955 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:48:13.833286 systemd-logind[2108]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:48:13.834170 systemd-logind[2108]: Removed session 19. Dec 16 12:48:13.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.49:22-10.200.16.10:55876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:13.912035 systemd[1]: Started sshd@17-10.200.20.49:22-10.200.16.10:55876.service - OpenSSH per-connection server daemon (10.200.16.10:55876). Dec 16 12:48:14.300000 audit[6208]: USER_ACCT pid=6208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:14.302881 sshd[6208]: Accepted publickey for core from 10.200.16.10 port 55876 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:14.302000 audit[6208]: CRED_ACQ pid=6208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:14.305569 sshd-session[6208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:14.304000 audit[6208]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee945520 a2=3 a3=0 items=0 ppid=1 pid=6208 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:14.304000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:14.312144 systemd-logind[2108]: New session 20 of user core. Dec 16 12:48:14.316311 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 16 12:48:14.318000 audit[6208]: USER_START pid=6208 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:14.321000 audit[6211]: CRED_ACQ pid=6211 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:15.115000 audit[6224]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=6224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:48:15.115000 audit[6224]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd1546bf0 a2=0 a3=1 items=0 ppid=3794 pid=6224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:15.115000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:48:15.120000 audit[6224]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=6224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:48:15.120000 audit[6224]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd1546bf0 a2=0 a3=1 items=0 ppid=3794 pid=6224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:15.120000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:48:15.147000 audit[6226]: NETFILTER_CFG table=filter:144 family=2 entries=38 op=nft_register_rule pid=6226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:48:15.147000 audit[6226]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffc3375bc0 a2=0 a3=1 items=0 ppid=3794 pid=6226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:15.147000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:48:15.153000 audit[6226]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=6226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:48:15.153000 audit[6226]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc3375bc0 a2=0 a3=1 items=0 ppid=3794 pid=6226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:15.153000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:48:15.215577 sshd[6211]: Connection closed by 10.200.16.10 port 55876 Dec 16 12:48:15.216313 sshd-session[6208]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:15.218000 
audit[6208]: USER_END pid=6208 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:15.218000 audit[6208]: CRED_DISP pid=6208 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:15.222296 systemd[1]: sshd@17-10.200.20.49:22-10.200.16.10:55876.service: Deactivated successfully. Dec 16 12:48:15.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.49:22-10.200.16.10:55876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:15.226610 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:48:15.228389 systemd-logind[2108]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:48:15.230070 systemd-logind[2108]: Removed session 20. Dec 16 12:48:15.313237 systemd[1]: Started sshd@18-10.200.20.49:22-10.200.16.10:55886.service - OpenSSH per-connection server daemon (10.200.16.10:55886). Dec 16 12:48:15.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.49:22-10.200.16.10:55886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:15.713000 audit[6231]: USER_ACCT pid=6231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:15.715020 sshd[6231]: Accepted publickey for core from 10.200.16.10 port 55886 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:15.714000 audit[6231]: CRED_ACQ pid=6231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:15.714000 audit[6231]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcd706460 a2=3 a3=0 items=0 ppid=1 pid=6231 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:15.714000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:15.716369 sshd-session[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:15.721794 systemd-logind[2108]: New session 21 of user core. Dec 16 12:48:15.730379 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 16 12:48:15.732000 audit[6231]: USER_START pid=6231 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:15.734000 audit[6234]: CRED_ACQ pid=6234 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:16.140363 sshd[6234]: Connection closed by 10.200.16.10 port 55886 Dec 16 12:48:16.139395 sshd-session[6231]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:16.140000 audit[6231]: USER_END pid=6231 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:16.140000 audit[6231]: CRED_DISP pid=6231 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:16.143615 systemd[1]: sshd@18-10.200.20.49:22-10.200.16.10:55886.service: Deactivated successfully. Dec 16 12:48:16.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.49:22-10.200.16.10:55886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:16.147322 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:48:16.151222 systemd-logind[2108]: Session 21 logged out. Waiting for processes to exit. Dec 16 12:48:16.153047 systemd-logind[2108]: Removed session 21. Dec 16 12:48:16.230585 systemd[1]: Started sshd@19-10.200.20.49:22-10.200.16.10:55902.service - OpenSSH per-connection server daemon (10.200.16.10:55902). Dec 16 12:48:16.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.49:22-10.200.16.10:55902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:48:16.565747 kubelet[3656]: E1216 12:48:16.565666 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8447d995cc-vpb8q" podUID="f4be582d-98bf-4dca-8981-8263274550a3" Dec 16 12:48:16.668000 audit[6243]: USER_ACCT pid=6243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:16.669651 sshd[6243]: Accepted publickey for core from 10.200.16.10 port 55902 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:16.669000 audit[6243]: CRED_ACQ pid=6243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:16.669000 audit[6243]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff1802000 a2=3 a3=0 items=0 ppid=1 pid=6243 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:16.669000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:16.671514 sshd-session[6243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:16.679510 systemd-logind[2108]: New session 22 of user core. Dec 16 12:48:16.685390 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 12:48:16.687000 audit[6243]: USER_START pid=6243 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:16.690000 audit[6246]: CRED_ACQ pid=6246 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:16.952843 sshd[6246]: Connection closed by 10.200.16.10 port 55902 Dec 16 12:48:16.954839 sshd-session[6243]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:16.956000 audit[6243]: USER_END pid=6243 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:16.956000 audit[6243]: CRED_DISP pid=6243 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:16.961363 systemd[1]: sshd@19-10.200.20.49:22-10.200.16.10:55902.service: Deactivated successfully. Dec 16 12:48:16.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.49:22-10.200.16.10:55902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:16.963761 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 12:48:16.967849 systemd-logind[2108]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:48:16.969077 systemd-logind[2108]: Removed session 22. 
Dec 16 12:48:19.567561 kubelet[3656]: E1216 12:48:19.567499 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8447d995cc-x57ls" podUID="c9372ebb-481a-480c-8bf1-ba7918503e79" Dec 16 12:48:20.066000 audit[6272]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=6272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:48:20.071122 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 12:48:20.071202 kernel: audit: type=1325 audit(1765889300.066:888): table=filter:146 family=2 entries=26 op=nft_register_rule pid=6272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:48:20.066000 audit[6272]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc9536720 a2=0 a3=1 items=0 ppid=3794 pid=6272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:20.098409 kernel: audit: type=1300 audit(1765889300.066:888): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc9536720 a2=0 a3=1 items=0 ppid=3794 pid=6272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:20.066000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:48:20.109189 kernel: audit: type=1327 audit(1765889300.066:888): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:48:20.109327 kernel: audit: type=1325 audit(1765889300.080:889): table=nat:147 family=2 entries=104 op=nft_register_chain pid=6272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:48:20.080000 audit[6272]: NETFILTER_CFG table=nat:147 family=2 entries=104 op=nft_register_chain pid=6272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:48:20.080000 audit[6272]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffc9536720 a2=0 a3=1 items=0 ppid=3794 pid=6272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:20.140042 kernel: audit: type=1300 audit(1765889300.080:889): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffc9536720 a2=0 a3=1 items=0 ppid=3794 pid=6272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:20.080000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:48:20.149715 kernel: audit: type=1327 audit(1765889300.080:889): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:48:22.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.49:22-10.200.16.10:39220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:22.044761 systemd[1]: Started sshd@20-10.200.20.49:22-10.200.16.10:39220.service - OpenSSH per-connection server daemon (10.200.16.10:39220). Dec 16 12:48:22.063107 kernel: audit: type=1130 audit(1765889302.043:890): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.49:22-10.200.16.10:39220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:22.492000 audit[6274]: USER_ACCT pid=6274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:22.510252 sshd[6274]: Accepted publickey for core from 10.200.16.10 port 39220 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:22.512426 sshd-session[6274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:22.509000 audit[6274]: CRED_ACQ pid=6274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:22.513154 kernel: audit: type=1101 audit(1765889302.492:891): pid=6274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:22.540693 kernel: audit: type=1103 audit(1765889302.509:892): pid=6274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:22.540833 kernel: audit: type=1006 audit(1765889302.510:893): pid=6274 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 16 12:48:22.544450 systemd-logind[2108]: New session 23 of user core. Dec 16 12:48:22.510000 audit[6274]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc6990480 a2=3 a3=0 items=0 ppid=1 pid=6274 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:22.510000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:22.547299 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 12:48:22.549000 audit[6274]: USER_START pid=6274 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:22.551000 audit[6277]: CRED_ACQ pid=6277 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:22.565976 containerd[2141]: time="2025-12-16T12:48:22.565931929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:48:22.793018 sshd[6277]: Connection closed by 10.200.16.10 port 39220 Dec 16 12:48:22.793795 sshd-session[6274]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:22.796000 audit[6274]: USER_END pid=6274 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:22.796000 audit[6274]: CRED_DISP pid=6274 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:22.800556 systemd[1]: sshd@20-10.200.20.49:22-10.200.16.10:39220.service: Deactivated successfully. Dec 16 12:48:22.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.49:22-10.200.16.10:39220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:22.801259 systemd-logind[2108]: Session 23 logged out. Waiting for processes to exit. Dec 16 12:48:22.804506 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 12:48:22.806186 systemd-logind[2108]: Removed session 23. 
Dec 16 12:48:22.827126 containerd[2141]: time="2025-12-16T12:48:22.826568349Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:48:22.850788 containerd[2141]: time="2025-12-16T12:48:22.850611263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:48:22.850788 containerd[2141]: time="2025-12-16T12:48:22.850663144Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:48:22.850986 kubelet[3656]: E1216 12:48:22.850935 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:48:22.851275 kubelet[3656]: E1216 12:48:22.851000 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:48:22.851747 kubelet[3656]: E1216 12:48:22.851693 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hx7pt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-knbcd_calico-system(50d00cc7-1203-4290-806c-1437385334b5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:48:22.852907 kubelet[3656]: E1216 12:48:22.852875 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-knbcd" podUID="50d00cc7-1203-4290-806c-1437385334b5" Dec 16 12:48:23.568646 kubelet[3656]: E1216 12:48:23.568466 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fcd869d9b-bwx78" podUID="7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21" Dec 16 12:48:24.566991 kubelet[3656]: E1216 12:48:24.566912 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q9ldj" podUID="c8d2f0f9-d4bf-424e-80b4-888570287c6a" Dec 16 12:48:27.887199 kernel: 
kauditd_printk_skb: 7 callbacks suppressed Dec 16 12:48:27.887334 kernel: audit: type=1130 audit(1765889307.882:899): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.49:22-10.200.16.10:39234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:27.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.49:22-10.200.16.10:39234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:27.882781 systemd[1]: Started sshd@21-10.200.20.49:22-10.200.16.10:39234.service - OpenSSH per-connection server daemon (10.200.16.10:39234). Dec 16 12:48:28.328000 audit[6296]: USER_ACCT pid=6296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:28.328347 sshd[6296]: Accepted publickey for core from 10.200.16.10 port 39234 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:28.348776 sshd-session[6296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:28.348000 audit[6296]: CRED_ACQ pid=6296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:28.367579 kernel: audit: type=1101 audit(1765889308.328:900): pid=6296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:28.367724 kernel: audit: type=1103 audit(1765889308.348:901): pid=6296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:28.380678 kernel: audit: type=1006 audit(1765889308.348:902): pid=6296 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 12:48:28.348000 audit[6296]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff86d5ba0 a2=3 a3=0 items=0 ppid=1 pid=6296 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:28.401131 kernel: audit: type=1300 audit(1765889308.348:902): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff86d5ba0 a2=3 a3=0 items=0 ppid=1 pid=6296 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:28.403622 systemd-logind[2108]: New session 24 of user core. 
Dec 16 12:48:28.348000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:28.412202 kernel: audit: type=1327 audit(1765889308.348:902): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:28.416339 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 12:48:28.419000 audit[6296]: USER_START pid=6296 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:28.421000 audit[6299]: CRED_ACQ pid=6299 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:28.461964 kernel: audit: type=1105 audit(1765889308.419:903): pid=6296 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:28.462125 kernel: audit: type=1103 audit(1765889308.421:904): pid=6299 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:28.565273 containerd[2141]: time="2025-12-16T12:48:28.565020359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:48:28.630589 sshd[6299]: Connection closed by 10.200.16.10 port 39234 Dec 16 12:48:28.630408 sshd-session[6296]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:28.632000 audit[6296]: USER_END pid=6296 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:28.658875 systemd-logind[2108]: Session 24 logged out. Waiting for processes to exit. Dec 16 12:48:28.659562 systemd[1]: sshd@21-10.200.20.49:22-10.200.16.10:39234.service: Deactivated successfully. Dec 16 12:48:28.632000 audit[6296]: CRED_DISP pid=6296 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:28.663351 systemd[1]: session-24.scope: Deactivated successfully. 
Dec 16 12:48:28.677143 kernel: audit: type=1106 audit(1765889308.632:905): pid=6296 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:28.677291 kernel: audit: type=1104 audit(1765889308.632:906): pid=6296 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:28.681539 systemd-logind[2108]: Removed session 24. Dec 16 12:48:28.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.49:22-10.200.16.10:39234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:28.849130 containerd[2141]: time="2025-12-16T12:48:28.849068474Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:48:28.852668 containerd[2141]: time="2025-12-16T12:48:28.852600790Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:48:28.852668 containerd[2141]: time="2025-12-16T12:48:28.852632086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:48:28.853782 kubelet[3656]: E1216 12:48:28.853735 3656 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:48:28.854205 kubelet[3656]: E1216 12:48:28.853798 3656 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:48:28.854205 kubelet[3656]: E1216 12:48:28.853915 3656 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lszvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8447d995cc-vpb8q_calico-apiserver(f4be582d-98bf-4dca-8981-8263274550a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:48:28.856231 kubelet[3656]: E1216 12:48:28.856179 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8447d995cc-vpb8q" podUID="f4be582d-98bf-4dca-8981-8263274550a3" Dec 16 12:48:29.565497 kubelet[3656]: E1216 12:48:29.565455 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c8c58856c-hs58v" podUID="1c9e39e8-3a67-4975-af12-07644724165b" Dec 16 12:48:30.565536 kubelet[3656]: E1216 12:48:30.565230 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8447d995cc-x57ls" podUID="c9372ebb-481a-480c-8bf1-ba7918503e79" Dec 16 12:48:33.731127 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:48:33.731278 kernel: audit: type=1130 audit(1765889313.720:908): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.49:22-10.200.16.10:35376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:33.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.49:22-10.200.16.10:35376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:33.721241 systemd[1]: Started sshd@22-10.200.20.49:22-10.200.16.10:35376.service - OpenSSH per-connection server daemon (10.200.16.10:35376). Dec 16 12:48:34.169000 audit[6339]: USER_ACCT pid=6339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:34.179699 sshd[6339]: Accepted publickey for core from 10.200.16.10 port 35376 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:34.192241 kernel: audit: type=1101 audit(1765889314.169:909): pid=6339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:34.192384 kernel: audit: type=1103 audit(1765889314.190:910): pid=6339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:34.190000 audit[6339]: CRED_ACQ pid=6339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:34.191657 sshd-session[6339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:34.219365 systemd-logind[2108]: New session 25 of user core. 
Dec 16 12:48:34.220125 kernel: audit: type=1006 audit(1765889314.190:911): pid=6339 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 12:48:34.190000 audit[6339]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc7d2d020 a2=3 a3=0 items=0 ppid=1 pid=6339 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:34.240286 kernel: audit: type=1300 audit(1765889314.190:911): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc7d2d020 a2=3 a3=0 items=0 ppid=1 pid=6339 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:34.190000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:34.248477 kernel: audit: type=1327 audit(1765889314.190:911): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:34.249445 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 16 12:48:34.253000 audit[6339]: USER_START pid=6339 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:34.255000 audit[6342]: CRED_ACQ pid=6342 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:34.292292 kernel: audit: type=1105 audit(1765889314.253:912): pid=6339 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:34.292442 kernel: audit: type=1103 audit(1765889314.255:913): pid=6342 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:34.477298 sshd[6342]: Connection closed by 10.200.16.10 port 35376 Dec 16 12:48:34.478126 sshd-session[6339]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:34.478000 audit[6339]: USER_END pid=6339 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:34.481953 systemd-logind[2108]: Session 25 logged out. Waiting for processes to exit. Dec 16 12:48:34.484118 systemd[1]: sshd@22-10.200.20.49:22-10.200.16.10:35376.service: Deactivated successfully. Dec 16 12:48:34.487972 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 12:48:34.490504 systemd-logind[2108]: Removed session 25. 
Dec 16 12:48:34.478000 audit[6339]: CRED_DISP pid=6339 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:34.515243 kernel: audit: type=1106 audit(1765889314.478:914): pid=6339 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:34.515390 kernel: audit: type=1104 audit(1765889314.478:915): pid=6339 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:34.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.49:22-10.200.16.10:35376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:34.565592 kubelet[3656]: E1216 12:48:34.565545 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-knbcd" podUID="50d00cc7-1203-4290-806c-1437385334b5" Dec 16 12:48:35.568399 kubelet[3656]: E1216 12:48:35.568344 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fcd869d9b-bwx78" podUID="7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21" Dec 16 12:48:38.566671 kubelet[3656]: E1216 12:48:38.566611 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-q9ldj" podUID="c8d2f0f9-d4bf-424e-80b4-888570287c6a" Dec 16 12:48:39.574016 systemd[1]: Started sshd@23-10.200.20.49:22-10.200.16.10:35390.service - OpenSSH per-connection server daemon (10.200.16.10:35390). Dec 16 12:48:39.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.49:22-10.200.16.10:35390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:39.578046 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:48:39.578292 kernel: audit: type=1130 audit(1765889319.573:917): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.49:22-10.200.16.10:35390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:40.023000 audit[6354]: USER_ACCT pid=6354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:40.041853 sshd[6354]: Accepted publickey for core from 10.200.16.10 port 35390 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:40.049475 sshd-session[6354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:40.045000 audit[6354]: CRED_ACQ pid=6354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:40.071130 kernel: audit: type=1101 audit(1765889320.023:918): pid=6354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:40.071255 kernel: audit: type=1103 audit(1765889320.045:919): pid=6354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:40.055891 systemd-logind[2108]: New session 26 of user core. Dec 16 12:48:40.073822 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 16 12:48:40.087547 kernel: audit: type=1006 audit(1765889320.045:920): pid=6354 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 16 12:48:40.045000 audit[6354]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe18a4e40 a2=3 a3=0 items=0 ppid=1 pid=6354 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:40.112042 kernel: audit: type=1300 audit(1765889320.045:920): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe18a4e40 a2=3 a3=0 items=0 ppid=1 pid=6354 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:40.045000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:40.123668 kernel: audit: type=1327 audit(1765889320.045:920): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:40.079000 audit[6354]: USER_START pid=6354 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:40.142460 kernel: audit: type=1105 audit(1765889320.079:921): pid=6354 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:40.080000 audit[6357]: CRED_ACQ pid=6357 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:40.158309 kernel: audit: type=1103 audit(1765889320.080:922): pid=6357 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:40.315408 sshd[6357]: Connection closed by 10.200.16.10 port 35390 Dec 16 12:48:40.317277 sshd-session[6354]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:40.317000 audit[6354]: USER_END pid=6354 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:40.321379 systemd-logind[2108]: Session 26 logged out. Waiting for processes to exit. Dec 16 12:48:40.323479 systemd[1]: sshd@23-10.200.20.49:22-10.200.16.10:35390.service: Deactivated successfully. Dec 16 12:48:40.328170 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 12:48:40.331225 systemd-logind[2108]: Removed session 26. 
Dec 16 12:48:40.317000 audit[6354]: CRED_DISP pid=6354 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:40.357119 kernel: audit: type=1106 audit(1765889320.317:923): pid=6354 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:40.357261 kernel: audit: type=1104 audit(1765889320.317:924): pid=6354 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:40.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.49:22-10.200.16.10:35390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:43.565392 kubelet[3656]: E1216 12:48:43.565074 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c8c58856c-hs58v" podUID="1c9e39e8-3a67-4975-af12-07644724165b" Dec 16 12:48:44.565597 kubelet[3656]: E1216 12:48:44.565547 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8447d995cc-vpb8q" podUID="f4be582d-98bf-4dca-8981-8263274550a3" Dec 16 12:48:44.565999 kubelet[3656]: E1216 12:48:44.565846 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8447d995cc-x57ls" podUID="c9372ebb-481a-480c-8bf1-ba7918503e79" Dec 16 12:48:45.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.49:22-10.200.16.10:56652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:45.409546 systemd[1]: Started sshd@24-10.200.20.49:22-10.200.16.10:56652.service - OpenSSH per-connection server daemon (10.200.16.10:56652). 
Dec 16 12:48:45.412837 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:48:45.412933 kernel: audit: type=1130 audit(1765889325.408:926): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.49:22-10.200.16.10:56652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:48:45.856000 audit[6369]: USER_ACCT pid=6369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:45.873019 sshd[6369]: Accepted publickey for core from 10.200.16.10 port 56652 ssh2: RSA SHA256:EQywgtLEgQeM5AlE8u6bSLW5d7rgTODhxBgGmPiEalo Dec 16 12:48:45.874521 sshd-session[6369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:48:45.873000 audit[6369]: CRED_ACQ pid=6369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:45.890114 kernel: audit: type=1101 audit(1765889325.856:927): pid=6369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:45.890251 kernel: audit: type=1103 audit(1765889325.873:928): pid=6369 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:45.899731 kernel: audit: type=1006 audit(1765889325.873:929): pid=6369 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Dec 16 12:48:45.873000 audit[6369]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffad8af80 a2=3 a3=0 items=0 ppid=1 pid=6369 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:45.918353 kernel: audit: type=1300 audit(1765889325.873:929): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffad8af80 a2=3 a3=0 items=0 ppid=1 pid=6369 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:48:45.920156 kernel: audit: type=1327 audit(1765889325.873:929): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:45.873000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:48:45.927609 systemd-logind[2108]: New session 27 of user core. Dec 16 12:48:45.935283 systemd[1]: Started session-27.scope - Session 27 of User core. 
Dec 16 12:48:45.936000 audit[6369]: USER_START pid=6369 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:45.954000 audit[6372]: CRED_ACQ pid=6372 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:45.969496 kernel: audit: type=1105 audit(1765889325.936:930): pid=6369 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:45.969625 kernel: audit: type=1103 audit(1765889325.954:931): pid=6372 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:46.177211 sshd[6372]: Connection closed by 10.200.16.10 port 56652 Dec 16 12:48:46.178125 sshd-session[6369]: pam_unix(sshd:session): session closed for user core Dec 16 12:48:46.178000 audit[6369]: USER_END pid=6369 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:46.179000 audit[6369]: CRED_DISP pid=6369 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:46.199526 systemd[1]: sshd@24-10.200.20.49:22-10.200.16.10:56652.service: Deactivated successfully. Dec 16 12:48:46.204043 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 12:48:46.208578 systemd-logind[2108]: Session 27 logged out. Waiting for processes to exit. Dec 16 12:48:46.210112 systemd-logind[2108]: Removed session 27. Dec 16 12:48:46.215241 kernel: audit: type=1106 audit(1765889326.178:932): pid=6369 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:46.215321 kernel: audit: type=1104 audit(1765889326.179:933): pid=6369 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 12:48:46.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.49:22-10.200.16.10:56652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:48:46.567203 kubelet[3656]: E1216 12:48:46.566386 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-knbcd" podUID="50d00cc7-1203-4290-806c-1437385334b5" Dec 16 12:48:48.567373 kubelet[3656]: E1216 12:48:48.567302 3656 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fcd869d9b-bwx78" podUID="7003c08f-2a9c-4fb5-8691-d2bf3d7c9d21"