Dec 12 17:21:47.309143 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 12 17:21:47.309166 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Dec 12 15:17:36 -00 2025 Dec 12 17:21:47.309176 kernel: KASLR enabled Dec 12 17:21:47.309182 kernel: efi: EFI v2.7 by EDK II Dec 12 17:21:47.309188 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18 Dec 12 17:21:47.309194 kernel: random: crng init done Dec 12 17:21:47.309201 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Dec 12 17:21:47.309207 kernel: secureboot: Secure boot enabled Dec 12 17:21:47.309215 kernel: ACPI: Early table checksum verification disabled Dec 12 17:21:47.309221 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS ) Dec 12 17:21:47.309227 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013) Dec 12 17:21:47.309233 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:21:47.309239 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:21:47.309246 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:21:47.309255 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:21:47.309261 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:21:47.309267 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:21:47.309274 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:21:47.309281 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:21:47.309287 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:21:47.309293 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Dec 12 17:21:47.309300 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 12 17:21:47.309308 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Dec 12 17:21:47.309314 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff] Dec 12 17:21:47.309320 kernel: Zone ranges: Dec 12 17:21:47.309327 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Dec 12 17:21:47.309333 kernel: DMA32 empty Dec 12 17:21:47.309339 kernel: Normal empty Dec 12 17:21:47.309345 kernel: Device empty Dec 12 17:21:47.309352 kernel: Movable zone start for each node Dec 12 17:21:47.309358 kernel: Early memory node ranges Dec 12 17:21:47.309364 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff] Dec 12 17:21:47.309371 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff] Dec 12 17:21:47.309378 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff] Dec 12 17:21:47.309386 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff] Dec 12 17:21:47.309392 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff] Dec 12 17:21:47.309399 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff] Dec 12 17:21:47.309406 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff] Dec 12 17:21:47.309413 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff] Dec 12 17:21:47.309419 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Dec 12 17:21:47.309430 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000000dcffffff] Dec 12 17:21:47.309436 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Dec 12 17:21:47.309443 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1 Dec 12 17:21:47.309460 kernel: psci: probing for conduit method from ACPI. Dec 12 17:21:47.309467 kernel: psci: PSCIv1.1 detected in firmware. Dec 12 17:21:47.309474 kernel: psci: Using standard PSCI v0.2 function IDs Dec 12 17:21:47.309481 kernel: psci: Trusted OS migration not required Dec 12 17:21:47.309487 kernel: psci: SMC Calling Convention v1.1 Dec 12 17:21:47.309497 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Dec 12 17:21:47.309504 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 12 17:21:47.309511 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 12 17:21:47.309525 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Dec 12 17:21:47.309533 kernel: Detected PIPT I-cache on CPU0 Dec 12 17:21:47.309540 kernel: CPU features: detected: GIC system register CPU interface Dec 12 17:21:47.309546 kernel: CPU features: detected: Spectre-v4 Dec 12 17:21:47.309553 kernel: CPU features: detected: Spectre-BHB Dec 12 17:21:47.309560 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 12 17:21:47.309567 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 12 17:21:47.309574 kernel: CPU features: detected: ARM erratum 1418040 Dec 12 17:21:47.309583 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 12 17:21:47.309590 kernel: alternatives: applying boot alternatives Dec 12 17:21:47.309598 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 12 17:21:47.309605 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 12 17:21:47.309612 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 12 17:21:47.309619 kernel: Fallback order for Node 0: 0 Dec 12 17:21:47.309625 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Dec 12 17:21:47.309632 kernel: Policy zone: DMA Dec 12 17:21:47.309640 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 12 17:21:47.309646 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Dec 12 17:21:47.309654 kernel: software IO TLB: area num 4. Dec 12 17:21:47.309661 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Dec 12 17:21:47.309668 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB) Dec 12 17:21:47.309675 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 12 17:21:47.309682 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 12 17:21:47.309690 kernel: rcu: RCU event tracing is enabled. Dec 12 17:21:47.309697 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 12 17:21:47.309704 kernel: Trampoline variant of Tasks RCU enabled. Dec 12 17:21:47.309711 kernel: Tracing variant of Tasks RCU enabled. Dec 12 17:21:47.309718 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 12 17:21:47.309725 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 12 17:21:47.309732 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 12 17:21:47.309740 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 12 17:21:47.309747 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 12 17:21:47.309754 kernel: GICv3: 256 SPIs implemented Dec 12 17:21:47.309761 kernel: GICv3: 0 Extended SPIs implemented Dec 12 17:21:47.309768 kernel: Root IRQ handler: gic_handle_irq Dec 12 17:21:47.309775 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 12 17:21:47.309782 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Dec 12 17:21:47.309789 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Dec 12 17:21:47.309796 kernel: ITS [mem 0x08080000-0x0809ffff] Dec 12 17:21:47.309803 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Dec 12 17:21:47.309810 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Dec 12 17:21:47.309818 kernel: GICv3: using LPI property table @0x0000000040130000 Dec 12 17:21:47.309825 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Dec 12 17:21:47.309832 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 12 17:21:47.309839 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:21:47.309846 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 12 17:21:47.309853 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 12 17:21:47.309860 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 12 17:21:47.309867 kernel: arm-pv: using stolen time PV Dec 12 17:21:47.309875 kernel: Console: colour dummy device 80x25 Dec 12 17:21:47.309884 kernel: ACPI: Core revision 20240827 Dec 12 17:21:47.309891 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 12 17:21:47.309912 kernel: pid_max: default: 32768 minimum: 301 Dec 12 17:21:47.309920 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 12 17:21:47.309927 kernel: landlock: Up and running. Dec 12 17:21:47.309934 kernel: SELinux: Initializing. Dec 12 17:21:47.309941 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 12 17:21:47.309948 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 12 17:21:47.309956 kernel: rcu: Hierarchical SRCU implementation. Dec 12 17:21:47.309964 kernel: rcu: Max phase no-delay instances is 400. Dec 12 17:21:47.309971 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 12 17:21:47.309979 kernel: Remapping and enabling EFI services. Dec 12 17:21:47.309986 kernel: smp: Bringing up secondary CPUs ... 
Dec 12 17:21:47.309993 kernel: Detected PIPT I-cache on CPU1 Dec 12 17:21:47.310001 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Dec 12 17:21:47.310009 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Dec 12 17:21:47.310017 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:21:47.310029 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 12 17:21:47.310045 kernel: Detected PIPT I-cache on CPU2 Dec 12 17:21:47.310054 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Dec 12 17:21:47.310061 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Dec 12 17:21:47.310068 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:21:47.310076 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Dec 12 17:21:47.310083 kernel: Detected PIPT I-cache on CPU3 Dec 12 17:21:47.310093 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Dec 12 17:21:47.310101 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Dec 12 17:21:47.310109 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:21:47.310116 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Dec 12 17:21:47.310124 kernel: smp: Brought up 1 node, 4 CPUs Dec 12 17:21:47.310133 kernel: SMP: Total of 4 processors activated. Dec 12 17:21:47.310141 kernel: CPU: All CPU(s) started at EL1 Dec 12 17:21:47.310148 kernel: CPU features: detected: 32-bit EL0 Support Dec 12 17:21:47.310156 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 12 17:21:47.310163 kernel: CPU features: detected: Common not Private translations Dec 12 17:21:47.310171 kernel: CPU features: detected: CRC32 instructions Dec 12 17:21:47.310179 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 12 17:21:47.310188 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 12 17:21:47.310196 kernel: CPU features: detected: LSE atomic instructions Dec 12 17:21:47.310203 kernel: CPU features: detected: Privileged Access Never Dec 12 17:21:47.310211 kernel: CPU features: detected: RAS Extension Support Dec 12 17:21:47.310219 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 12 17:21:47.310226 kernel: alternatives: applying system-wide alternatives Dec 12 17:21:47.310234 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Dec 12 17:21:47.310242 kernel: Memory: 2448804K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12416K init, 1038K bss, 101148K reserved, 16384K cma-reserved) Dec 12 17:21:47.310251 kernel: devtmpfs: initialized Dec 12 17:21:47.310259 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 12 17:21:47.310267 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 12 17:21:47.310274 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 12 17:21:47.310282 kernel: 0 pages in range for non-PLT usage Dec 12 17:21:47.310289 kernel: 515184 pages in range for PLT usage Dec 12 17:21:47.310296 kernel: pinctrl core: initialized pinctrl subsystem Dec 12 17:21:47.310306 kernel: SMBIOS 3.0.0 present. 
Dec 12 17:21:47.310313 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Dec 12 17:21:47.310321 kernel: DMI: Memory slots populated: 1/1 Dec 12 17:21:47.310329 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 12 17:21:47.310337 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 12 17:21:47.310344 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 12 17:21:47.310352 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 12 17:21:47.310361 kernel: audit: initializing netlink subsys (disabled) Dec 12 17:21:47.310369 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1 Dec 12 17:21:47.310376 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 12 17:21:47.310384 kernel: cpuidle: using governor menu Dec 12 17:21:47.310391 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 12 17:21:47.310399 kernel: ASID allocator initialised with 32768 entries Dec 12 17:21:47.310407 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 12 17:21:47.310415 kernel: Serial: AMBA PL011 UART driver Dec 12 17:21:47.310423 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 12 17:21:47.310431 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 12 17:21:47.310438 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 12 17:21:47.310446 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 12 17:21:47.310460 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 12 17:21:47.310467 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 12 17:21:47.310474 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 12 17:21:47.310484 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 12 17:21:47.310491 kernel: ACPI: Added _OSI(Module Device) Dec 12 17:21:47.310499 kernel: ACPI: Added _OSI(Processor Device) Dec 12 17:21:47.310507 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 12 17:21:47.310514 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 12 17:21:47.310522 kernel: ACPI: Interpreter enabled Dec 12 17:21:47.310529 kernel: ACPI: Using GIC for interrupt routing Dec 12 17:21:47.310538 kernel: ACPI: MCFG table detected, 1 entries Dec 12 17:21:47.310546 kernel: ACPI: CPU0 has been hot-added Dec 12 17:21:47.310553 kernel: ACPI: CPU1 has been hot-added Dec 12 17:21:47.310561 kernel: ACPI: CPU2 has been hot-added Dec 12 17:21:47.310568 kernel: ACPI: CPU3 has been hot-added Dec 12 17:21:47.310576 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Dec 12 17:21:47.310583 kernel: printk: legacy console [ttyAMA0] enabled Dec 12 17:21:47.310591 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 12 17:21:47.310765 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 12 17:21:47.310855 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 12 17:21:47.310937 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 12 17:21:47.311021 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Dec 12 17:21:47.311145 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Dec 12 17:21:47.311161 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Dec 12 17:21:47.311169 
kernel: PCI host bridge to bus 0000:00 Dec 12 17:21:47.311263 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Dec 12 17:21:47.311346 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 12 17:21:47.311421 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Dec 12 17:21:47.311511 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 12 17:21:47.311622 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Dec 12 17:21:47.311715 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Dec 12 17:21:47.311805 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Dec 12 17:21:47.311889 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Dec 12 17:21:47.311972 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Dec 12 17:21:47.312088 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Dec 12 17:21:47.312200 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Dec 12 17:21:47.312288 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Dec 12 17:21:47.312369 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 12 17:21:47.312454 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 12 17:21:47.312537 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 12 17:21:47.312552 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 12 17:21:47.312561 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 12 17:21:47.312569 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 12 17:21:47.312577 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 12 17:21:47.312585 kernel: iommu: Default domain type: Translated Dec 12 17:21:47.312593 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 12 17:21:47.312600 kernel: efivars: Registered efivars operations Dec 12 17:21:47.312610 kernel: vgaarb: loaded Dec 12 17:21:47.312617 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 12 17:21:47.312625 kernel: VFS: Disk quotas dquot_6.6.0 Dec 12 17:21:47.312633 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 12 17:21:47.312641 kernel: pnp: PnP ACPI init Dec 12 17:21:47.312741 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 12 17:21:47.312752 kernel: pnp: PnP ACPI: found 1 devices Dec 12 17:21:47.312762 kernel: NET: Registered PF_INET protocol family Dec 12 17:21:47.312770 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 12 17:21:47.312778 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 12 17:21:47.312786 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 12 17:21:47.312793 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 12 17:21:47.312801 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 12 17:21:47.312808 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 12 17:21:47.312818 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 12 17:21:47.312826 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 12 17:21:47.312833 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 12 17:21:47.312841 kernel: PCI: CLS 0 bytes, default 64 Dec 12 17:21:47.312849 
kernel: kvm [1]: HYP mode not available Dec 12 17:21:47.312856 kernel: Initialise system trusted keyrings Dec 12 17:21:47.312864 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 12 17:21:47.312873 kernel: Key type asymmetric registered Dec 12 17:21:47.312880 kernel: Asymmetric key parser 'x509' registered Dec 12 17:21:47.312888 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 12 17:21:47.312896 kernel: io scheduler mq-deadline registered Dec 12 17:21:47.312903 kernel: io scheduler kyber registered Dec 12 17:21:47.312911 kernel: io scheduler bfq registered Dec 12 17:21:47.312919 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 12 17:21:47.312928 kernel: ACPI: button: Power Button [PWRB] Dec 12 17:21:47.312936 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 12 17:21:47.313030 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Dec 12 17:21:47.313049 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 12 17:21:47.313056 kernel: thunder_xcv, ver 1.0 Dec 12 17:21:47.313064 kernel: thunder_bgx, ver 1.0 Dec 12 17:21:47.313071 kernel: nicpf, ver 1.0 Dec 12 17:21:47.313081 kernel: nicvf, ver 1.0 Dec 12 17:21:47.313190 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 12 17:21:47.313269 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:21:46 UTC (1765560106) Dec 12 17:21:47.313280 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 12 17:21:47.313287 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 12 17:21:47.313295 kernel: watchdog: NMI not fully supported Dec 12 17:21:47.313303 kernel: watchdog: Hard watchdog permanently disabled Dec 12 17:21:47.313314 kernel: NET: Registered PF_INET6 protocol family Dec 12 17:21:47.313322 kernel: Segment Routing with IPv6 Dec 12 17:21:47.313330 kernel: In-situ OAM (IOAM) with IPv6 Dec 12 17:21:47.313337 kernel: NET: Registered PF_PACKET protocol family Dec 12 17:21:47.313345 kernel: Key type dns_resolver registered Dec 12 17:21:47.313353 kernel: registered taskstats version 1 Dec 12 17:21:47.313366 kernel: Loading compiled-in X.509 certificates Dec 12 17:21:47.313376 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: a5d527f63342895c4af575176d4ae6e640b6d0e9' Dec 12 17:21:47.313384 kernel: Demotion targets for Node 0: null Dec 12 17:21:47.313391 kernel: Key type .fscrypt registered Dec 12 17:21:47.313399 kernel: Key type fscrypt-provisioning registered Dec 12 17:21:47.313406 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 12 17:21:47.313414 kernel: ima: Allocated hash algorithm: sha1 Dec 12 17:21:47.313422 kernel: ima: No architecture policies found Dec 12 17:21:47.313432 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 12 17:21:47.313440 kernel: clk: Disabling unused clocks Dec 12 17:21:47.313456 kernel: PM: genpd: Disabling unused power domains Dec 12 17:21:47.313464 kernel: Freeing unused kernel memory: 12416K Dec 12 17:21:47.313473 kernel: Run /init as init process Dec 12 17:21:47.313480 kernel: with arguments: Dec 12 17:21:47.313488 kernel: /init Dec 12 17:21:47.313498 kernel: with environment: Dec 12 17:21:47.313506 kernel: HOME=/ Dec 12 17:21:47.313513 kernel: TERM=linux Dec 12 17:21:47.313618 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Dec 12 17:21:47.313701 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Dec 12 17:21:47.313711 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 17:21:47.313721 kernel: GPT:16515071 != 27000831 Dec 12 17:21:47.313729 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 12 17:21:47.313736 kernel: GPT:16515071 != 27000831 Dec 12 17:21:47.313743 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 12 17:21:47.313751 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 17:21:47.313758 kernel: SCSI subsystem initialized Dec 12 17:21:47.313766 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 12 17:21:47.313776 kernel: device-mapper: uevent: version 1.0.3 Dec 12 17:21:47.313784 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 17:21:47.313792 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 12 17:21:47.313800 kernel: raid6: neonx8 gen() 15726 MB/s Dec 12 17:21:47.313808 kernel: raid6: neonx4 gen() 13218 MB/s Dec 12 17:21:47.313815 kernel: raid6: neonx2 gen() 8090 MB/s Dec 12 17:21:47.313823 kernel: raid6: neonx1 gen() 9916 MB/s Dec 12 17:21:47.313850 kernel: raid6: int64x8 gen() 6827 MB/s Dec 12 17:21:47.313859 kernel: raid6: int64x4 gen() 7312 MB/s Dec 12 17:21:47.313867 kernel: raid6: int64x2 gen() 5742 MB/s Dec 12 17:21:47.313875 kernel: raid6: int64x1 gen() 5050 MB/s Dec 12 17:21:47.313882 kernel: raid6: using algorithm neonx8 gen() 15726 MB/s Dec 12 17:21:47.313890 kernel: raid6: .... 
xor() 12043 MB/s, rmw enabled Dec 12 17:21:47.313897 kernel: raid6: using neon recovery algorithm Dec 12 17:21:47.313905 kernel: xor: measuring software checksum speed Dec 12 17:21:47.313915 kernel: 8regs : 21596 MB/sec Dec 12 17:21:47.313923 kernel: 32regs : 21687 MB/sec Dec 12 17:21:47.313931 kernel: arm64_neon : 28128 MB/sec Dec 12 17:21:47.313939 kernel: xor: using function: arm64_neon (28128 MB/sec) Dec 12 17:21:47.313947 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 17:21:47.313955 kernel: BTRFS: device fsid d09b8b5a-fb5f-4a17-94ef-0a452535b2bc devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (206) Dec 12 17:21:47.313963 kernel: BTRFS info (device dm-0): first mount of filesystem d09b8b5a-fb5f-4a17-94ef-0a452535b2bc Dec 12 17:21:47.313972 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:21:47.313980 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 17:21:47.313988 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 17:21:47.313996 kernel: loop: module loaded Dec 12 17:21:47.314003 kernel: loop0: detected capacity change from 0 to 91480 Dec 12 17:21:47.314011 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 17:21:47.314020 systemd[1]: Successfully made /usr/ read-only. Dec 12 17:21:47.314045 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:21:47.314055 systemd[1]: Detected virtualization kvm. Dec 12 17:21:47.314063 systemd[1]: Detected architecture arm64. Dec 12 17:21:47.314071 systemd[1]: Running in initrd. Dec 12 17:21:47.314082 systemd[1]: No hostname configured, using default hostname. Dec 12 17:21:47.314097 systemd[1]: Hostname set to . Dec 12 17:21:47.314106 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 17:21:47.314127 systemd[1]: Queued start job for default target initrd.target. Dec 12 17:21:47.314136 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:21:47.314144 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:21:47.314153 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:21:47.314162 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 12 17:21:47.314172 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:21:47.314181 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 12 17:21:47.314190 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 12 17:21:47.314198 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:21:47.314207 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:21:47.314217 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:21:47.314226 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:21:47.314234 systemd[1]: Reached target slices.target - Slice Units. 
Dec 12 17:21:47.314242 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:21:47.314251 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:21:47.314259 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:21:47.314267 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:21:47.314277 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:21:47.314286 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 12 17:21:47.314294 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 12 17:21:47.314303 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:21:47.314311 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:21:47.314328 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:21:47.314338 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:21:47.314347 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 17:21:47.314356 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 12 17:21:47.314364 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:21:47.314373 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 12 17:21:47.314382 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 12 17:21:47.314392 systemd[1]: Starting systemd-fsck-usr.service... Dec 12 17:21:47.314400 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:21:47.314409 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:21:47.314418 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:21:47.314428 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 12 17:21:47.314437 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:21:47.314446 systemd[1]: Finished systemd-fsck-usr.service. Dec 12 17:21:47.314461 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 17:21:47.314470 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 12 17:21:47.314505 systemd-journald[346]: Collecting audit messages is enabled. Dec 12 17:21:47.314528 kernel: Bridge firewalling registered Dec 12 17:21:47.314537 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:21:47.314547 kernel: audit: type=1130 audit(1765560107.313:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.314558 systemd-journald[346]: Journal started Dec 12 17:21:47.314649 systemd-journald[346]: Runtime Journal (/run/log/journal/6e4f9304c1bc4a5bbbf5a6e035a0724a) is 6M, max 48.5M, 42.4M free. Dec 12 17:21:47.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:21:47.308933 systemd-modules-load[347]: Inserted module 'br_netfilter' Dec 12 17:21:47.320849 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:21:47.320879 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:21:47.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.324085 kernel: audit: type=1130 audit(1765560107.321:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.324262 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:21:47.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.328132 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:21:47.332609 kernel: audit: type=1130 audit(1765560107.325:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.332637 kernel: audit: type=1130 audit(1765560107.329:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.332094 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 12 17:21:47.336015 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:21:47.350926 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:21:47.354182 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:21:47.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.358000 audit: BPF prog-id=6 op=LOAD Dec 12 17:21:47.359683 kernel: audit: type=1130 audit(1765560107.355:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.359730 kernel: audit: type=1334 audit(1765560107.358:7): prog-id=6 op=LOAD Dec 12 17:21:47.359521 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:21:47.361409 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:21:47.364142 systemd-tmpfiles[372]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 17:21:47.369481 kernel: audit: type=1130 audit(1765560107.366:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 12 17:21:47.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.376500 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:21:47.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.381140 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:21:47.385613 kernel: audit: type=1130 audit(1765560107.377:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.385640 kernel: audit: type=1130 audit(1765560107.382:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.384022 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 12 17:21:47.409227 dracut-cmdline[393]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 12 17:21:47.419675 systemd-resolved[378]: Positive Trust Anchors: Dec 12 17:21:47.419697 systemd-resolved[378]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:21:47.419700 systemd-resolved[378]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 17:21:47.419731 systemd-resolved[378]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:21:47.443152 systemd-resolved[378]: Defaulting to hostname 'linux'. Dec 12 17:21:47.444142 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:21:47.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.445505 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:21:47.497063 kernel: Loading iSCSI transport class v2.0-870. 
Dec 12 17:21:47.506069 kernel: iscsi: registered transport (tcp) Dec 12 17:21:47.520063 kernel: iscsi: registered transport (qla4xxx) Dec 12 17:21:47.520086 kernel: QLogic iSCSI HBA Driver Dec 12 17:21:47.542376 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:21:47.569152 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:21:47.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.570668 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:21:47.625966 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 17:21:47.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.629558 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 12 17:21:47.631406 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 12 17:21:47.672166 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:21:47.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.676000 audit: BPF prog-id=7 op=LOAD Dec 12 17:21:47.676000 audit: BPF prog-id=8 op=LOAD Dec 12 17:21:47.677485 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:21:47.710774 systemd-udevd[634]: Using default interface naming scheme 'v257'. Dec 12 17:21:47.719060 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:21:47.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.724093 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 17:21:47.749431 dracut-pre-trigger[680]: rd.md=0: removing MD RAID activation Dec 12 17:21:47.775618 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:21:47.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.778000 audit: BPF prog-id=9 op=LOAD Dec 12 17:21:47.778796 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:21:47.779848 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:21:47.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.782936 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Dec 12 17:21:47.827529 systemd-networkd[764]: lo: Link UP Dec 12 17:21:47.827538 systemd-networkd[764]: lo: Gained carrier Dec 12 17:21:47.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.828992 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:21:47.830084 systemd[1]: Reached target network.target - Network. Dec 12 17:21:47.847233 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:21:47.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.852423 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 17:21:47.899774 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 12 17:21:47.913788 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 12 17:21:47.930050 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:21:47.936448 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 12 17:21:47.938342 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 17:21:47.949417 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:21:47.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.949555 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:21:47.950580 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:21:47.954755 systemd-networkd[764]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:21:47.954769 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:21:47.956083 systemd-networkd[764]: eth0: Link UP Dec 12 17:21:47.956245 systemd-networkd[764]: eth0: Gained carrier Dec 12 17:21:47.956258 systemd-networkd[764]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:21:47.959089 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:21:47.965625 disk-uuid[807]: Primary Header is updated. Dec 12 17:21:47.965625 disk-uuid[807]: Secondary Entries is updated. Dec 12 17:21:47.965625 disk-uuid[807]: Secondary Header is updated. Dec 12 17:21:47.974708 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 17:21:47.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:47.975792 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.37/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 17:21:47.978330 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 12 17:21:47.980793 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:21:47.982844 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:21:47.991205 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 17:21:48.000183 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:21:48.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:48.017152 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:21:48.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:49.000789 disk-uuid[808]: Warning: The kernel is still using the old partition table. Dec 12 17:21:49.000789 disk-uuid[808]: The new table will be used at the next reboot or after you Dec 12 17:21:49.000789 disk-uuid[808]: run partprobe(8) or kpartx(8) Dec 12 17:21:49.000789 disk-uuid[808]: The operation has completed successfully. Dec 12 17:21:49.011244 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 17:21:49.011379 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 17:21:49.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:49.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:49.015128 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 17:21:49.049062 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (838) Dec 12 17:21:49.051063 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:21:49.051114 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:21:49.054217 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:21:49.054271 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:21:49.060042 kernel: BTRFS info (device vda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:21:49.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:49.060703 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 17:21:49.063130 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 12 17:21:49.150170 systemd-networkd[764]: eth0: Gained IPv6LL Dec 12 17:21:49.177163 ignition[857]: Ignition 2.22.0 Dec 12 17:21:49.177176 ignition[857]: Stage: fetch-offline Dec 12 17:21:49.177224 ignition[857]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:21:49.177233 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:21:49.177397 ignition[857]: parsed url from cmdline: "" Dec 12 17:21:49.177400 ignition[857]: no config URL provided Dec 12 17:21:49.177404 ignition[857]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 17:21:49.177414 ignition[857]: no config at "/usr/lib/ignition/user.ign" Dec 12 17:21:49.177460 ignition[857]: op(1): [started] loading QEMU firmware config module Dec 12 17:21:49.177464 ignition[857]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 12 17:21:49.183256 ignition[857]: op(1): [finished] loading QEMU firmware config module Dec 12 17:21:49.183280 ignition[857]: QEMU firmware config was not found. Ignoring... Dec 12 17:21:49.226824 ignition[857]: parsing config with SHA512: c13f5ba8ff44903461728e2d9cf5251475fa543f4ffdadda27ba458f358a67b17d846129d976b0ba09ad8f257577f1b1e4f3c17f37662869c4e0f2ca427ac3da Dec 12 17:21:49.231791 unknown[857]: fetched base config from "system" Dec 12 17:21:49.231803 unknown[857]: fetched user config from "qemu" Dec 12 17:21:49.232371 ignition[857]: fetch-offline: fetch-offline passed Dec 12 17:21:49.232447 ignition[857]: Ignition finished successfully Dec 12 17:21:49.234227 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:21:49.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:49.235595 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 12 17:21:49.236511 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 17:21:49.287744 ignition[874]: Ignition 2.22.0 Dec 12 17:21:49.288547 ignition[874]: Stage: kargs Dec 12 17:21:49.288737 ignition[874]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:21:49.288746 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:21:49.289633 ignition[874]: kargs: kargs passed Dec 12 17:21:49.289684 ignition[874]: Ignition finished successfully Dec 12 17:21:49.292639 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 17:21:49.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:49.294950 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 12 17:21:49.339709 ignition[882]: Ignition 2.22.0 Dec 12 17:21:49.339728 ignition[882]: Stage: disks Dec 12 17:21:49.339883 ignition[882]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:21:49.339892 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:21:49.340721 ignition[882]: disks: disks passed Dec 12 17:21:49.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:49.342902 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Dec 12 17:21:49.340771 ignition[882]: Ignition finished successfully Dec 12 17:21:49.344448 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 17:21:49.345478 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 17:21:49.346905 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:21:49.348194 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:21:49.349671 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:21:49.352309 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 17:21:49.397726 systemd-fsck[892]: ROOT: clean, 15/456736 files, 38230/456704 blocks Dec 12 17:21:49.403101 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 17:21:49.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:49.405280 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 17:21:49.482053 kernel: EXT4-fs (vda9): mounted filesystem fa93fc03-2e23-46f9-9013-1e396e3304a8 r/w with ordered data mode. Quota mode: none. Dec 12 17:21:49.482276 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 17:21:49.483474 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 17:21:49.486490 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:21:49.488197 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 17:21:49.489095 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 12 17:21:49.489144 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 17:21:49.489179 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:21:49.507992 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 17:21:49.510648 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 17:21:49.512883 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (900) Dec 12 17:21:49.515050 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:21:49.515077 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:21:49.517112 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:21:49.517155 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:21:49.518244 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 17:21:49.551148 initrd-setup-root[924]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 17:21:49.554569 initrd-setup-root[931]: cut: /sysroot/etc/group: No such file or directory Dec 12 17:21:49.559104 initrd-setup-root[938]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 17:21:49.563532 initrd-setup-root[945]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 17:21:49.655624 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 17:21:49.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:21:49.661397 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 17:21:49.677900 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 17:21:49.684595 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 17:21:49.686014 kernel: BTRFS info (device vda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:21:49.705099 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 12 17:21:49.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:49.719815 ignition[1013]: INFO : Ignition 2.22.0 Dec 12 17:21:49.719815 ignition[1013]: INFO : Stage: mount Dec 12 17:21:49.721242 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:21:49.721242 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:21:49.721242 ignition[1013]: INFO : mount: mount passed Dec 12 17:21:49.721242 ignition[1013]: INFO : Ignition finished successfully Dec 12 17:21:49.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:49.723933 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 17:21:49.725961 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 17:21:50.483991 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:21:50.516084 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1027) Dec 12 17:21:50.519999 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:21:50.520067 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:21:50.536831 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:21:50.536891 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:21:50.538326 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 12 17:21:50.574764 ignition[1044]: INFO : Ignition 2.22.0 Dec 12 17:21:50.574764 ignition[1044]: INFO : Stage: files Dec 12 17:21:50.576140 ignition[1044]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:21:50.576140 ignition[1044]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:21:50.576140 ignition[1044]: DEBUG : files: compiled without relabeling support, skipping Dec 12 17:21:50.578864 ignition[1044]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 17:21:50.578864 ignition[1044]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 17:21:50.581064 ignition[1044]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 17:21:50.581064 ignition[1044]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 17:21:50.583164 ignition[1044]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 17:21:50.583164 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 12 17:21:50.583164 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Dec 12 17:21:50.581282 unknown[1044]: wrote ssh authorized keys file for user: core Dec 12 17:21:50.643978 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 17:21:50.797889 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 12 17:21:50.797889 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 12 17:21:50.800879 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 17:21:50.800879 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:21:50.800879 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:21:50.800879 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:21:50.800879 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:21:50.800879 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:21:50.800879 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:21:50.824348 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:21:50.826671 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:21:50.826671 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:21:50.831059 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:21:50.831059 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:21:50.831059 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Dec 12 17:21:51.143153 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 17:21:51.340836 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:21:51.340836 ignition[1044]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 17:21:51.344175 ignition[1044]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:21:51.345695 ignition[1044]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:21:51.345695 ignition[1044]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 17:21:51.345695 ignition[1044]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 12 17:21:51.345695 ignition[1044]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 17:21:51.345695 ignition[1044]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 17:21:51.345695 ignition[1044]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 12 17:21:51.345695 ignition[1044]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 12 17:21:51.362072 ignition[1044]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 17:21:51.367681 ignition[1044]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 17:21:51.370260 ignition[1044]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 12 17:21:51.370260 ignition[1044]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 12 17:21:51.370260 ignition[1044]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 17:21:51.370260 ignition[1044]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:21:51.370260 ignition[1044]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:21:51.370260 ignition[1044]: INFO : files: files passed Dec 12 17:21:51.370260 ignition[1044]: INFO : Ignition finished successfully Dec 12 17:21:51.382398 kernel: kauditd_printk_skb: 26 callbacks suppressed Dec 12 17:21:51.382428 kernel: audit: type=1130 audit(1765560111.372:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:21:51.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.371108 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 17:21:51.373537 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 17:21:51.377574 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 17:21:51.391225 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 17:21:51.391366 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 17:21:51.399343 kernel: audit: type=1130 audit(1765560111.392:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.399388 kernel: audit: type=1131 audit(1765560111.392:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.399487 initrd-setup-root-after-ignition[1075]: grep: /sysroot/oem/oem-release: No such file or directory Dec 12 17:21:51.401246 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:21:51.401246 initrd-setup-root-after-ignition[1077]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:21:51.404362 initrd-setup-root-after-ignition[1081]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:21:51.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.403601 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:21:51.412040 kernel: audit: type=1130 audit(1765560111.403:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.405584 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 17:21:51.411989 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 17:21:51.479796 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 17:21:51.479924 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 17:21:51.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:21:51.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.482059 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 17:21:51.490066 kernel: audit: type=1130 audit(1765560111.481:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.490096 kernel: audit: type=1131 audit(1765560111.481:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.488766 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 17:21:51.491099 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 17:21:51.492176 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 17:21:51.529051 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:21:51.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.531555 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 17:21:51.534758 kernel: audit: type=1130 audit(1765560111.530:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.558908 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:21:51.559059 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:21:51.560995 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:21:51.562738 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 17:21:51.564257 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 17:21:51.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.564403 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:21:51.569975 kernel: audit: type=1131 audit(1765560111.565:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.568415 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 17:21:51.570844 systemd[1]: Stopped target basic.target - Basic System. Dec 12 17:21:51.572319 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 17:21:51.573826 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:21:51.575571 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 17:21:51.578571 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:21:51.580241 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
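[Editorial aside] The Ignition files stage logged above (the "core" user and its SSH keys, the Helm tarball under /opt, the files under /home/core and /etc/flatcar, the kubernetes sysext link under /etc/extensions, and the prepare-helm.service / coreos-metadata.service presets) is driven entirely by the provisioning config handed to the VM. As a rough, hypothetical illustration only, a Butane sketch of the kind of config that could produce those entries follows; Butane would transpile it to the Ignition JSON actually consumed at boot. The paths and download URLs are taken from the log itself, while the flatcar variant/version, the SSH key, file modes, inline file contents, and the unit body are placeholders, not recovered from this system.

# Hypothetical Butane sketch (variant: flatcar); values marked as placeholders are assumptions.
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...placeholder-key...
storage:
  files:
    - path: /opt/helm-v3.17.0-linux-arm64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz
    - path: /home/core/install.sh          # nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml would be declared the same way
      mode: 0755
      contents:
        inline: |
          #!/bin/bash
          echo "placeholder install script"
    - path: /etc/flatcar/update.conf
      contents:
        inline: |
          REBOOT_STRATEGY=off
    - path: /opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw
      contents:
        source: https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw
      hard: false
systemd:
  units:
    - name: coreos-metadata.service
      enabled: false                       # matches "setting preset to disabled" in the log
    - name: prepare-helm.service
      enabled: true
      contents: |
        [Unit]
        Description=Unpack helm CLI into /opt/bin (placeholder unit body)
        [Service]
        Type=oneshot
        ExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.17.0-linux-arm64.tar.gz --strip-components=1 linux-arm64/helm
        [Install]
        WantedBy=multi-user.target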
Dec 12 17:21:51.581802 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:21:51.583787 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 17:21:51.585423 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 17:21:51.586867 systemd[1]: Stopped target swap.target - Swaps. Dec 12 17:21:51.588236 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 17:21:51.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.588383 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:21:51.594272 kernel: audit: type=1131 audit(1765560111.590:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.593580 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:21:51.595211 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:21:51.597967 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 17:21:51.598098 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:21:51.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.599928 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 17:21:51.605222 kernel: audit: type=1131 audit(1765560111.601:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.600091 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 17:21:51.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.604477 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 17:21:51.604636 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:21:51.606293 systemd[1]: Stopped target paths.target - Path Units. Dec 12 17:21:51.607566 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 17:21:51.607680 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:21:51.609272 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 17:21:51.610791 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 17:21:51.612100 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 17:21:51.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.612204 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:21:51.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:21:51.613756 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 17:21:51.613845 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:21:51.615656 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 12 17:21:51.615732 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:21:51.617213 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 17:21:51.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.617339 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:21:51.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.618876 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 17:21:51.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.618985 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 17:21:51.621205 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 17:21:51.623184 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 17:21:51.624950 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 17:21:51.625114 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:21:51.626785 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 17:21:51.626907 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:21:51.628661 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 17:21:51.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.628767 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:21:51.634997 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 17:21:51.637074 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 17:21:51.646065 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 17:21:51.652103 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 17:21:51.654101 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 17:21:51.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:21:51.660701 ignition[1101]: INFO : Ignition 2.22.0 Dec 12 17:21:51.660701 ignition[1101]: INFO : Stage: umount Dec 12 17:21:51.662070 ignition[1101]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:21:51.662070 ignition[1101]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:21:51.662070 ignition[1101]: INFO : umount: umount passed Dec 12 17:21:51.662070 ignition[1101]: INFO : Ignition finished successfully Dec 12 17:21:51.664466 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 17:21:51.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.664642 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 17:21:51.666684 systemd[1]: Stopped target network.target - Network. Dec 12 17:21:51.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.667722 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 17:21:51.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.667798 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 17:21:51.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.669368 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 17:21:51.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.669423 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 17:21:51.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.670449 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 17:21:51.670504 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 17:21:51.671848 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 17:21:51.671893 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 17:21:51.673342 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 17:21:51.673395 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 17:21:51.675122 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 17:21:51.677298 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 17:21:51.688062 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 17:21:51.688197 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 17:21:51.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:21:51.691492 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 17:21:51.691599 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 17:21:51.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.696243 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 17:21:51.697152 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 17:21:51.697197 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:21:51.699731 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 17:21:51.700879 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 17:21:51.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.703000 audit: BPF prog-id=6 op=UNLOAD Dec 12 17:21:51.703000 audit: BPF prog-id=9 op=UNLOAD Dec 12 17:21:51.700953 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:21:51.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.702898 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:21:51.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.702960 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:21:51.704565 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 17:21:51.704615 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 17:21:51.706620 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:21:51.719075 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 17:21:51.719271 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:21:51.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.721260 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 17:21:51.721306 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 17:21:51.722947 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 17:21:51.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.722985 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:21:51.724411 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 17:21:51.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:21:51.724480 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:21:51.726817 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 17:21:51.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.726878 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 17:21:51.729158 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 17:21:51.729217 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:21:51.732471 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 17:21:51.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.733904 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 17:21:51.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.733972 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:21:51.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.735900 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 17:21:51.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.735951 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:21:51.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.737591 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 12 17:21:51.737636 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:21:51.739117 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 17:21:51.739162 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:21:51.741177 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:21:51.741234 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:21:51.758781 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 17:21:51.758919 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 17:21:51.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:21:51.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.760844 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 17:21:51.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:51.760940 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 17:21:51.762690 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 17:21:51.764615 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 17:21:51.775478 systemd[1]: Switching root. Dec 12 17:21:51.804804 systemd-journald[346]: Journal stopped Dec 12 17:21:52.669186 systemd-journald[346]: Received SIGTERM from PID 1 (systemd). Dec 12 17:21:52.669242 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 17:21:52.669258 kernel: SELinux: policy capability open_perms=1 Dec 12 17:21:52.669269 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 17:21:52.669282 kernel: SELinux: policy capability always_check_network=0 Dec 12 17:21:52.669293 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 17:21:52.669310 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 17:21:52.669324 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 17:21:52.669342 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 17:21:52.669356 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 17:21:52.669367 systemd[1]: Successfully loaded SELinux policy in 64.758ms. Dec 12 17:21:52.669389 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.896ms. Dec 12 17:21:52.669401 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:21:52.669415 systemd[1]: Detected virtualization kvm. Dec 12 17:21:52.669438 systemd[1]: Detected architecture arm64. Dec 12 17:21:52.669451 systemd[1]: Detected first boot. Dec 12 17:21:52.669462 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 17:21:52.669473 zram_generator::config[1145]: No configuration found. Dec 12 17:21:52.669491 kernel: NET: Registered PF_VSOCK protocol family Dec 12 17:21:52.669503 systemd[1]: Populated /etc with preset unit settings. Dec 12 17:21:52.669514 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 17:21:52.669525 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 17:21:52.669536 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 17:21:52.669548 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 17:21:52.669560 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 17:21:52.669575 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 17:21:52.669587 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Dec 12 17:21:52.669599 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 17:21:52.669610 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 17:21:52.669622 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 17:21:52.669633 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 17:21:52.669644 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:21:52.669660 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:21:52.669671 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 17:21:52.669683 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 17:21:52.669695 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 17:21:52.669706 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:21:52.669718 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 12 17:21:52.669729 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:21:52.669743 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:21:52.669754 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 17:21:52.669766 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 17:21:52.669777 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 17:21:52.669789 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 17:21:52.669800 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:21:52.669813 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:21:52.669825 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 12 17:21:52.669836 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:21:52.669848 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:21:52.669860 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 17:21:52.669871 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 17:21:52.669883 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 17:21:52.669896 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:21:52.669907 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 12 17:21:52.669918 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:21:52.669930 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 12 17:21:52.669941 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 12 17:21:52.669952 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:21:52.669964 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:21:52.669979 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 17:21:52.669991 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Dec 12 17:21:52.670002 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 17:21:52.670014 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 17:21:52.670025 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 17:21:52.670106 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 17:21:52.670120 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 17:21:52.670134 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 17:21:52.670146 systemd[1]: Reached target machines.target - Containers. Dec 12 17:21:52.670157 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 17:21:52.670169 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:21:52.670180 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:21:52.670191 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 17:21:52.670203 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:21:52.670217 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:21:52.670228 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:21:52.670239 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 17:21:52.670250 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:21:52.670329 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 17:21:52.670346 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 17:21:52.670359 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 17:21:52.670375 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 17:21:52.670387 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 17:21:52.670400 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:21:52.670414 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:21:52.670436 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:21:52.670449 kernel: ACPI: bus type drm_connector registered Dec 12 17:21:52.670461 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:21:52.670473 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 17:21:52.670485 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 17:21:52.670496 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:21:52.670507 kernel: fuse: init (API version 7.41) Dec 12 17:21:52.670521 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 17:21:52.670533 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 17:21:52.670546 systemd[1]: Mounted media.mount - External Media Directory. 
Dec 12 17:21:52.670557 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 17:21:52.670569 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 17:21:52.670581 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 17:21:52.670640 systemd-journald[1214]: Collecting audit messages is enabled. Dec 12 17:21:52.670670 systemd-journald[1214]: Journal started Dec 12 17:21:52.670694 systemd-journald[1214]: Runtime Journal (/run/log/journal/6e4f9304c1bc4a5bbbf5a6e035a0724a) is 6M, max 48.5M, 42.4M free. Dec 12 17:21:52.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.620000 audit: BPF prog-id=14 op=UNLOAD Dec 12 17:21:52.620000 audit: BPF prog-id=13 op=UNLOAD Dec 12 17:21:52.621000 audit: BPF prog-id=15 op=LOAD Dec 12 17:21:52.621000 audit: BPF prog-id=16 op=LOAD Dec 12 17:21:52.621000 audit: BPF prog-id=17 op=LOAD Dec 12 17:21:52.667000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 12 17:21:52.667000 audit[1214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffffae58f40 a2=4000 a3=0 items=0 ppid=1 pid=1214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:52.667000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 12 17:21:52.420536 systemd[1]: Queued start job for default target multi-user.target. Dec 12 17:21:52.444669 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 12 17:21:52.445225 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 17:21:52.672791 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 17:21:52.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.674769 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:21:52.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.676174 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:21:52.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.677564 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 17:21:52.677731 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Dec 12 17:21:52.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.679123 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:21:52.679286 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:21:52.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.681545 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:21:52.681730 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:21:52.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.682977 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:21:52.683178 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:21:52.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.684485 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 17:21:52.684653 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 17:21:52.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.685995 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:21:52.686325 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:21:52.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:21:52.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.687755 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:21:52.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.689194 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:21:52.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.691323 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 17:21:52.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.692928 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 17:21:52.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.706215 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:21:52.707725 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 12 17:21:52.710098 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 17:21:52.712095 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 17:21:52.712994 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 17:21:52.713028 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:21:52.714759 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 17:21:52.716246 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:21:52.716365 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:21:52.723231 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 17:21:52.725346 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 17:21:52.726365 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:21:52.727726 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 17:21:52.728889 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:21:52.730175 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Dec 12 17:21:52.738158 systemd-journald[1214]: Time spent on flushing to /var/log/journal/6e4f9304c1bc4a5bbbf5a6e035a0724a is 14.453ms for 1004 entries. Dec 12 17:21:52.738158 systemd-journald[1214]: System Journal (/var/log/journal/6e4f9304c1bc4a5bbbf5a6e035a0724a) is 8M, max 163.5M, 155.5M free. Dec 12 17:21:52.763441 systemd-journald[1214]: Received client request to flush runtime journal. Dec 12 17:21:52.763496 kernel: loop1: detected capacity change from 0 to 100192 Dec 12 17:21:52.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.734657 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 17:21:52.736883 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 17:21:52.739103 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:21:52.741524 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 17:21:52.742927 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 17:21:52.753281 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 17:21:52.754866 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:21:52.756733 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 17:21:52.759484 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 17:21:52.774411 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 17:21:52.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.778020 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Dec 12 17:21:52.778098 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Dec 12 17:21:52.782024 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:21:52.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.785202 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 17:21:52.791051 kernel: loop2: detected capacity change from 0 to 109872 Dec 12 17:21:52.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:21:52.799287 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 17:21:52.817635 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 17:21:52.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.820057 kernel: loop3: detected capacity change from 0 to 207008 Dec 12 17:21:52.820000 audit: BPF prog-id=18 op=LOAD Dec 12 17:21:52.820000 audit: BPF prog-id=19 op=LOAD Dec 12 17:21:52.820000 audit: BPF prog-id=20 op=LOAD Dec 12 17:21:52.823000 audit: BPF prog-id=21 op=LOAD Dec 12 17:21:52.821241 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 12 17:21:52.826209 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:21:52.828500 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:21:52.831000 audit: BPF prog-id=22 op=LOAD Dec 12 17:21:52.831000 audit: BPF prog-id=23 op=LOAD Dec 12 17:21:52.831000 audit: BPF prog-id=24 op=LOAD Dec 12 17:21:52.832172 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 12 17:21:52.834000 audit: BPF prog-id=25 op=LOAD Dec 12 17:21:52.846000 audit: BPF prog-id=26 op=LOAD Dec 12 17:21:52.846000 audit: BPF prog-id=27 op=LOAD Dec 12 17:21:52.848615 kernel: loop4: detected capacity change from 0 to 100192 Dec 12 17:21:52.849816 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 17:21:52.859070 kernel: loop5: detected capacity change from 0 to 109872 Dec 12 17:21:52.861885 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Dec 12 17:21:52.862257 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Dec 12 17:21:52.867060 kernel: loop6: detected capacity change from 0 to 207008 Dec 12 17:21:52.867149 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:21:52.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.874605 (sd-merge)[1288]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Dec 12 17:21:52.877953 (sd-merge)[1288]: Merged extensions into '/usr'. Dec 12 17:21:52.880956 systemd-nsresourced[1287]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 12 17:21:52.884706 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 12 17:21:52.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:52.886086 systemd[1]: Reload requested from client PID 1262 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 17:21:52.886102 systemd[1]: Reloading... Dec 12 17:21:52.961060 zram_generator::config[1336]: No configuration found. Dec 12 17:21:52.971713 systemd-resolved[1284]: Positive Trust Anchors: Dec 12 17:21:52.971731 systemd-resolved[1284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:21:52.971735 systemd-resolved[1284]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 17:21:52.971767 systemd-resolved[1284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:21:52.973715 systemd-oomd[1283]: No swap; memory pressure usage will be degraded Dec 12 17:21:52.978309 systemd-resolved[1284]: Defaulting to hostname 'linux'. Dec 12 17:21:53.107452 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 17:21:53.108075 systemd[1]: Reloading finished in 221 ms. Dec 12 17:21:53.140908 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 17:21:53.142132 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 12 17:21:53.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:53.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:53.143218 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:21:53.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:53.144469 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 17:21:53.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:53.150096 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:21:53.169531 systemd[1]: Starting ensure-sysext.service... Dec 12 17:21:53.173000 audit: BPF prog-id=28 op=LOAD Dec 12 17:21:53.173000 audit: BPF prog-id=21 op=UNLOAD Dec 12 17:21:53.173000 audit: BPF prog-id=29 op=LOAD Dec 12 17:21:53.174000 audit: BPF prog-id=15 op=UNLOAD Dec 12 17:21:53.174000 audit: BPF prog-id=30 op=LOAD Dec 12 17:21:53.174000 audit: BPF prog-id=31 op=LOAD Dec 12 17:21:53.174000 audit: BPF prog-id=16 op=UNLOAD Dec 12 17:21:53.174000 audit: BPF prog-id=17 op=UNLOAD Dec 12 17:21:53.171418 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
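The two ". IN DS ..." lines above are the DNSSEC trust anchors for the root zone that systemd-resolved starts from: key tags 20326 and 38696, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256). As a rough illustration of the record layout only (not resolved's own parser), a DS line splits into owner, class, type, key tag, algorithm, digest type and digest:

    # Sketch only: split one of the logged trust-anchor records into its fields.
    record = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
    owner, rrclass, rrtype, key_tag, algorithm, digest_type, digest = record.split()
    print(key_tag, algorithm, digest_type, len(digest) // 2)   # 20326 8 2 32 -> a 32-byte SHA-256 digest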
Dec 12 17:21:53.174000 audit: BPF prog-id=32 op=LOAD Dec 12 17:21:53.174000 audit: BPF prog-id=18 op=UNLOAD Dec 12 17:21:53.175000 audit: BPF prog-id=33 op=LOAD Dec 12 17:21:53.175000 audit: BPF prog-id=34 op=LOAD Dec 12 17:21:53.175000 audit: BPF prog-id=19 op=UNLOAD Dec 12 17:21:53.175000 audit: BPF prog-id=20 op=UNLOAD Dec 12 17:21:53.176000 audit: BPF prog-id=35 op=LOAD Dec 12 17:21:53.176000 audit: BPF prog-id=25 op=UNLOAD Dec 12 17:21:53.176000 audit: BPF prog-id=36 op=LOAD Dec 12 17:21:53.176000 audit: BPF prog-id=37 op=LOAD Dec 12 17:21:53.176000 audit: BPF prog-id=26 op=UNLOAD Dec 12 17:21:53.176000 audit: BPF prog-id=27 op=UNLOAD Dec 12 17:21:53.177000 audit: BPF prog-id=38 op=LOAD Dec 12 17:21:53.177000 audit: BPF prog-id=22 op=UNLOAD Dec 12 17:21:53.177000 audit: BPF prog-id=39 op=LOAD Dec 12 17:21:53.177000 audit: BPF prog-id=40 op=LOAD Dec 12 17:21:53.177000 audit: BPF prog-id=23 op=UNLOAD Dec 12 17:21:53.177000 audit: BPF prog-id=24 op=UNLOAD Dec 12 17:21:53.182961 systemd[1]: Reload requested from client PID 1366 ('systemctl') (unit ensure-sysext.service)... Dec 12 17:21:53.182978 systemd[1]: Reloading... Dec 12 17:21:53.190177 systemd-tmpfiles[1367]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 17:21:53.190213 systemd-tmpfiles[1367]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 17:21:53.190469 systemd-tmpfiles[1367]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 17:21:53.191826 systemd-tmpfiles[1367]: ACLs are not supported, ignoring. Dec 12 17:21:53.191942 systemd-tmpfiles[1367]: ACLs are not supported, ignoring. Dec 12 17:21:53.196019 systemd-tmpfiles[1367]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:21:53.196139 systemd-tmpfiles[1367]: Skipping /boot Dec 12 17:21:53.202720 systemd-tmpfiles[1367]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:21:53.202842 systemd-tmpfiles[1367]: Skipping /boot Dec 12 17:21:53.239092 zram_generator::config[1399]: No configuration found. Dec 12 17:21:53.379589 systemd[1]: Reloading finished in 196 ms. Dec 12 17:21:53.403228 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 17:21:53.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:21:53.405000 audit: BPF prog-id=41 op=LOAD Dec 12 17:21:53.405000 audit: BPF prog-id=32 op=UNLOAD Dec 12 17:21:53.406000 audit: BPF prog-id=42 op=LOAD Dec 12 17:21:53.406000 audit: BPF prog-id=43 op=LOAD Dec 12 17:21:53.406000 audit: BPF prog-id=33 op=UNLOAD Dec 12 17:21:53.406000 audit: BPF prog-id=34 op=UNLOAD Dec 12 17:21:53.407000 audit: BPF prog-id=44 op=LOAD Dec 12 17:21:53.407000 audit: BPF prog-id=29 op=UNLOAD Dec 12 17:21:53.408000 audit: BPF prog-id=45 op=LOAD Dec 12 17:21:53.408000 audit: BPF prog-id=46 op=LOAD Dec 12 17:21:53.408000 audit: BPF prog-id=30 op=UNLOAD Dec 12 17:21:53.408000 audit: BPF prog-id=31 op=UNLOAD Dec 12 17:21:53.409000 audit: BPF prog-id=47 op=LOAD Dec 12 17:21:53.409000 audit: BPF prog-id=35 op=UNLOAD Dec 12 17:21:53.409000 audit: BPF prog-id=48 op=LOAD Dec 12 17:21:53.409000 audit: BPF prog-id=49 op=LOAD Dec 12 17:21:53.409000 audit: BPF prog-id=36 op=UNLOAD Dec 12 17:21:53.409000 audit: BPF prog-id=37 op=UNLOAD Dec 12 17:21:53.409000 audit: BPF prog-id=50 op=LOAD Dec 12 17:21:53.424000 audit: BPF prog-id=28 op=UNLOAD Dec 12 17:21:53.425000 audit: BPF prog-id=51 op=LOAD Dec 12 17:21:53.425000 audit: BPF prog-id=38 op=UNLOAD Dec 12 17:21:53.425000 audit: BPF prog-id=52 op=LOAD Dec 12 17:21:53.425000 audit: BPF prog-id=53 op=LOAD Dec 12 17:21:53.426000 audit: BPF prog-id=39 op=UNLOAD Dec 12 17:21:53.426000 audit: BPF prog-id=40 op=UNLOAD Dec 12 17:21:53.428534 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:21:53.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:53.437811 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:21:53.440762 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 17:21:53.453296 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 17:21:53.457370 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 17:21:53.458000 audit: BPF prog-id=8 op=UNLOAD Dec 12 17:21:53.458000 audit: BPF prog-id=7 op=UNLOAD Dec 12 17:21:53.459000 audit: BPF prog-id=54 op=LOAD Dec 12 17:21:53.459000 audit: BPF prog-id=55 op=LOAD Dec 12 17:21:53.460875 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:21:53.463812 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 17:21:53.471200 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:21:53.472801 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:21:53.478461 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:21:53.478000 audit[1446]: SYSTEM_BOOT pid=1446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 12 17:21:53.481469 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:21:53.482741 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 12 17:21:53.482966 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:21:53.483122 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:21:53.493102 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 17:21:53.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:53.496254 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:21:53.496683 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:21:53.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:53.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:53.504348 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:21:53.504697 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:21:53.504968 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:21:53.506220 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:21:53.506386 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:21:53.507871 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 17:21:53.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:53.511024 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:21:53.516474 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:21:53.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:53.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:53.522154 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
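Several units above are skipped because their unit conditions are unmet rather than because they failed; the two conditions quoted in the log amount to simple filesystem checks. A minimal sketch of those checks (not systemd's implementation), using the paths from the log:

    import os
    # ConditionPathExists= from the systemd-hibernate-clear entry above
    hibernate_var = "/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67"
    # ConditionDirectoryNotEmpty= from the systemd-pstore entry above
    pstore_dir = "/sys/fs/pstore"
    print("hibernate-clear condition met:", os.path.exists(hibernate_var))
    print("pstore archival condition met:", os.path.isdir(pstore_dir) and bool(os.listdir(pstore_dir)))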
Dec 12 17:21:53.522534 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:21:53.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:53.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:53.524667 systemd-udevd[1440]: Using default interface naming scheme 'v257'. Dec 12 17:21:53.528584 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:21:53.531410 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:21:53.534291 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:21:53.537416 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:21:53.539000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 12 17:21:53.539000 audit[1471]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe13ed9a0 a2=420 a3=0 items=0 ppid=1435 pid=1471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:53.539000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:21:53.540686 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:21:53.540979 augenrules[1471]: No rules Dec 12 17:21:53.541869 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:21:53.542190 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:21:53.542321 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:21:53.543542 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:21:53.546096 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:21:53.548293 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 17:21:53.550532 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:21:53.550767 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:21:53.552715 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:21:53.552971 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:21:53.556650 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:21:53.557679 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:21:53.559534 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:21:53.561395 systemd[1]: modprobe@drm.service: Deactivated successfully. 
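The PROCTITLE field in the audit record above carries the process's command line as NUL-separated argv, hex-encoded because of the embedded NUL bytes. Decoding the logged value (a one-off sketch, hex string copied from the record):

    raw = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    print([part.decode() for part in bytes.fromhex(raw).split(b"\x00")])
    # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules'] -- auditctl loading the rules file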
Dec 12 17:21:53.562263 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:21:53.567659 systemd[1]: Finished ensure-sysext.service. Dec 12 17:21:53.580498 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:21:53.582237 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:21:53.582318 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:21:53.586617 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 12 17:21:53.589136 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 17:21:53.630014 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:21:53.641777 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 17:21:53.651106 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 12 17:21:53.662651 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 17:21:53.698076 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 17:21:53.699828 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 17:21:53.701418 systemd-networkd[1502]: lo: Link UP Dec 12 17:21:53.701435 systemd-networkd[1502]: lo: Gained carrier Dec 12 17:21:53.702649 systemd-networkd[1502]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:21:53.702658 systemd-networkd[1502]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:21:53.702749 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:21:53.703824 systemd-networkd[1502]: eth0: Link UP Dec 12 17:21:53.703987 systemd-networkd[1502]: eth0: Gained carrier Dec 12 17:21:53.704006 systemd-networkd[1502]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:21:53.704619 systemd[1]: Reached target network.target - Network. Dec 12 17:21:53.707754 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 17:21:53.710911 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 17:21:53.717237 systemd-networkd[1502]: eth0: DHCPv4 address 10.0.0.37/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 17:21:53.717927 systemd-timesyncd[1504]: Network configuration changed, trying to establish connection. Dec 12 17:21:53.719072 systemd-timesyncd[1504]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 12 17:21:53.719133 systemd-timesyncd[1504]: Initial clock synchronization to Fri 2025-12-12 17:21:53.382937 UTC. Dec 12 17:21:53.732114 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 17:21:53.811500 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
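The DHCPv4 lease above (10.0.0.37/16 with gateway 10.0.0.1) puts both the gateway and the NTP server that systemd-timesyncd contacted on the local subnet. A quick standard-library check of that, using the addresses from the log:

    import ipaddress
    iface = ipaddress.ip_interface("10.0.0.37/16")
    print(iface.network, iface.network.num_addresses)          # 10.0.0.0/16 65536
    print(ipaddress.ip_address("10.0.0.1") in iface.network)   # True: gateway/NTP server is on-link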
Dec 12 17:21:53.834315 ldconfig[1437]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 17:21:53.841128 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 17:21:53.844948 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 17:21:53.865944 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 17:21:53.867561 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:21:53.870809 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:21:53.872059 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 17:21:53.873031 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 17:21:53.874217 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 17:21:53.875173 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 17:21:53.876167 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 12 17:21:53.877254 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 12 17:21:53.878153 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 17:21:53.879333 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 17:21:53.879370 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:21:53.880108 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:21:53.883334 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 17:21:53.885957 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 17:21:53.889117 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 17:21:53.890366 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 17:21:53.891443 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 17:21:53.894683 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 17:21:53.896165 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 17:21:53.897963 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 17:21:53.899119 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:21:53.899910 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:21:53.900850 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:21:53.900900 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:21:53.902194 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 17:21:53.905828 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 17:21:53.908064 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 17:21:53.911612 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 17:21:53.916458 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
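The ldconfig complaint above is purely about magic bytes: ELF objects begin with 0x7f followed by "ELF", while /usr/lib/ld.so.conf is a plain-text configuration file that happens to live in a directory ldconfig scans for libraries. A minimal version of that check (hypothetical helper, path taken from the log line):

    def looks_like_elf(path):
        # ELF files start with the 4-byte magic b"\x7fELF"
        with open(path, "rb") as f:
            return f.read(4) == b"\x7fELF"
    print(looks_like_elf("/usr/lib/ld.so.conf"))   # expected: False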
Dec 12 17:21:53.917461 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 17:21:53.919361 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 17:21:53.921841 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 17:21:53.925191 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 17:21:53.928367 jq[1552]: false Dec 12 17:21:53.928867 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 17:21:53.933224 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 17:21:53.934724 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 17:21:53.935294 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 17:21:53.936304 extend-filesystems[1553]: Found /dev/vda6 Dec 12 17:21:53.936761 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 17:21:53.941060 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 17:21:53.943766 extend-filesystems[1553]: Found /dev/vda9 Dec 12 17:21:53.949347 extend-filesystems[1553]: Checking size of /dev/vda9 Dec 12 17:21:53.949655 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 17:21:53.952975 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 17:21:53.953306 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 17:21:53.953633 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 17:21:53.953861 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 17:21:53.954176 jq[1571]: true Dec 12 17:21:53.956205 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 17:21:53.956449 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 17:21:53.976112 jq[1580]: true Dec 12 17:21:53.980678 tar[1578]: linux-arm64/LICENSE Dec 12 17:21:53.980678 tar[1578]: linux-arm64/helm Dec 12 17:21:53.987543 extend-filesystems[1553]: Resized partition /dev/vda9 Dec 12 17:21:53.989021 extend-filesystems[1601]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 17:21:53.998322 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Dec 12 17:21:53.998388 update_engine[1564]: I20251212 17:21:53.996454 1564 main.cc:92] Flatcar Update Engine starting Dec 12 17:21:54.019450 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Dec 12 17:21:54.019324 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 17:21:54.018849 dbus-daemon[1550]: [system] SELinux support is enabled Dec 12 17:21:54.023622 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 17:21:54.033374 update_engine[1564]: I20251212 17:21:54.032598 1564 update_check_scheduler.cc:74] Next update check in 4m24s Dec 12 17:21:54.023672 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
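The ext4 resize above grows /dev/vda9 from 456704 to 1784827 blocks; with the 4 KiB block size that the resize2fs output just below reports, that is roughly 1.7 GiB expanded to about 6.8 GiB (the root filesystem growing to fill its partition on first boot). The arithmetic, as a quick check:

    # Numbers taken from the log; 4096-byte blocks per the "(4k) blocks" note below.
    old_blocks, new_blocks, block_size = 456704, 1784827, 4096
    def to_gib(blocks):
        return blocks * block_size / 2**30
    print(f"{to_gib(old_blocks):.2f} GiB -> {to_gib(new_blocks):.2f} GiB")   # 1.74 GiB -> 6.81 GiB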
Dec 12 17:21:54.025587 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 17:21:54.025604 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 17:21:54.033055 systemd[1]: Started update-engine.service - Update Engine. Dec 12 17:21:54.034558 extend-filesystems[1601]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 17:21:54.034558 extend-filesystems[1601]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 12 17:21:54.034558 extend-filesystems[1601]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Dec 12 17:21:54.047272 extend-filesystems[1553]: Resized filesystem in /dev/vda9 Dec 12 17:21:54.048946 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 17:21:54.052445 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 17:21:54.053016 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 17:21:54.071796 systemd-logind[1562]: Watching system buttons on /dev/input/event0 (Power Button) Dec 12 17:21:54.072554 systemd-logind[1562]: New seat seat0. Dec 12 17:21:54.083832 bash[1623]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:21:54.084987 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 17:21:54.087043 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 17:21:54.090520 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 12 17:21:54.124850 containerd[1581]: time="2025-12-12T17:21:54Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 17:21:54.125652 containerd[1581]: time="2025-12-12T17:21:54.125609766Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 12 17:21:54.128222 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 17:21:54.144131 containerd[1581]: time="2025-12-12T17:21:54.144000489Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.458µs" Dec 12 17:21:54.144131 containerd[1581]: time="2025-12-12T17:21:54.144063410Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 17:21:54.144242 containerd[1581]: time="2025-12-12T17:21:54.144130125Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 17:21:54.144242 containerd[1581]: time="2025-12-12T17:21:54.144149477Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 17:21:54.144329 containerd[1581]: time="2025-12-12T17:21:54.144308351Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 17:21:54.144354 containerd[1581]: time="2025-12-12T17:21:54.144337206Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:21:54.144412 containerd[1581]: time="2025-12-12T17:21:54.144393727Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile 
type=io.containerd.snapshotter.v1 Dec 12 17:21:54.144449 containerd[1581]: time="2025-12-12T17:21:54.144409324Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:21:54.144772 containerd[1581]: time="2025-12-12T17:21:54.144743549Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:21:54.144823 containerd[1581]: time="2025-12-12T17:21:54.144770986Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:21:54.144823 containerd[1581]: time="2025-12-12T17:21:54.144798691Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:21:54.144823 containerd[1581]: time="2025-12-12T17:21:54.144811797Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 17:21:54.144991 containerd[1581]: time="2025-12-12T17:21:54.144970594Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 17:21:54.145016 containerd[1581]: time="2025-12-12T17:21:54.144996498Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 17:21:54.145131 containerd[1581]: time="2025-12-12T17:21:54.145113221Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 17:21:54.145327 containerd[1581]: time="2025-12-12T17:21:54.145303364Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:21:54.145367 containerd[1581]: time="2025-12-12T17:21:54.145348849Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:21:54.145391 containerd[1581]: time="2025-12-12T17:21:54.145365633Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 17:21:54.145665 containerd[1581]: time="2025-12-12T17:21:54.145424110Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 17:21:54.146445 containerd[1581]: time="2025-12-12T17:21:54.146294353Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 17:21:54.146553 containerd[1581]: time="2025-12-12T17:21:54.146532856Z" level=info msg="metadata content store policy set" policy=shared Dec 12 17:21:54.150661 containerd[1581]: time="2025-12-12T17:21:54.150550001Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 17:21:54.150661 containerd[1581]: time="2025-12-12T17:21:54.150622962Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 17:21:54.150758 containerd[1581]: time="2025-12-12T17:21:54.150745087Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs 
type=io.containerd.differ.v1 Dec 12 17:21:54.150778 containerd[1581]: time="2025-12-12T17:21:54.150762561Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 17:21:54.150795 containerd[1581]: time="2025-12-12T17:21:54.150778157Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 17:21:54.150795 containerd[1581]: time="2025-12-12T17:21:54.150792182Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 17:21:54.150839 containerd[1581]: time="2025-12-12T17:21:54.150804368Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 17:21:54.150839 containerd[1581]: time="2025-12-12T17:21:54.150826938Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 17:21:54.150894 containerd[1581]: time="2025-12-12T17:21:54.150839545Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 17:21:54.150894 containerd[1581]: time="2025-12-12T17:21:54.150861234Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 17:21:54.150894 containerd[1581]: time="2025-12-12T17:21:54.150872999Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 17:21:54.150894 containerd[1581]: time="2025-12-12T17:21:54.150883920Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 17:21:54.150959 containerd[1581]: time="2025-12-12T17:21:54.150894381Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 17:21:54.150959 containerd[1581]: time="2025-12-12T17:21:54.150907295Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 17:21:54.153056 containerd[1581]: time="2025-12-12T17:21:54.151077588Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 17:21:54.153056 containerd[1581]: time="2025-12-12T17:21:54.151106520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 17:21:54.153056 containerd[1581]: time="2025-12-12T17:21:54.151122959Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 17:21:54.153056 containerd[1581]: time="2025-12-12T17:21:54.151133727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 17:21:54.153056 containerd[1581]: time="2025-12-12T17:21:54.151144801Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 17:21:54.153056 containerd[1581]: time="2025-12-12T17:21:54.151155224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 17:21:54.153056 containerd[1581]: time="2025-12-12T17:21:54.151167372Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 17:21:54.153056 containerd[1581]: time="2025-12-12T17:21:54.151179442Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 17:21:54.153056 containerd[1581]: time="2025-12-12T17:21:54.151198372Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 17:21:54.153056 containerd[1581]: time="2025-12-12T17:21:54.151209332Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 17:21:54.153056 containerd[1581]: time="2025-12-12T17:21:54.151219448Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 17:21:54.153056 containerd[1581]: time="2025-12-12T17:21:54.151249184Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 17:21:54.153056 containerd[1581]: time="2025-12-12T17:21:54.151288386Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 17:21:54.153056 containerd[1581]: time="2025-12-12T17:21:54.151302794Z" level=info msg="Start snapshots syncer" Dec 12 17:21:54.153056 containerd[1581]: time="2025-12-12T17:21:54.151333258Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 17:21:54.153315 containerd[1581]: time="2025-12-12T17:21:54.151585862Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 17:21:54.153315 containerd[1581]: time="2025-12-12T17:21:54.151634797Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 17:21:54.153413 containerd[1581]: time="2025-12-12T17:21:54.151820303Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 17:21:54.153413 containerd[1581]: time="2025-12-12T17:21:54.151922541Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 
17:21:54.153413 containerd[1581]: time="2025-12-12T17:21:54.151942965Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 17:21:54.153413 containerd[1581]: time="2025-12-12T17:21:54.151954231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 17:21:54.153413 containerd[1581]: time="2025-12-12T17:21:54.151964424Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 17:21:54.153413 containerd[1581]: time="2025-12-12T17:21:54.151976878Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 17:21:54.153413 containerd[1581]: time="2025-12-12T17:21:54.151988949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 17:21:54.153413 containerd[1581]: time="2025-12-12T17:21:54.152008032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 17:21:54.153413 containerd[1581]: time="2025-12-12T17:21:54.152023858Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 17:21:54.153413 containerd[1581]: time="2025-12-12T17:21:54.152061527Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 17:21:54.153413 containerd[1581]: time="2025-12-12T17:21:54.152091378Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:21:54.153413 containerd[1581]: time="2025-12-12T17:21:54.152105326Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:21:54.153413 containerd[1581]: time="2025-12-12T17:21:54.152114255Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:21:54.153654 containerd[1581]: time="2025-12-12T17:21:54.152123758Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:21:54.153654 containerd[1581]: time="2025-12-12T17:21:54.152132687Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 17:21:54.153654 containerd[1581]: time="2025-12-12T17:21:54.152143301Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 17:21:54.153654 containerd[1581]: time="2025-12-12T17:21:54.152153303Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 17:21:54.153654 containerd[1581]: time="2025-12-12T17:21:54.152186488Z" level=info msg="runtime interface created" Dec 12 17:21:54.153654 containerd[1581]: time="2025-12-12T17:21:54.152191661Z" level=info msg="created NRI interface" Dec 12 17:21:54.153654 containerd[1581]: time="2025-12-12T17:21:54.152200129Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 17:21:54.153654 containerd[1581]: time="2025-12-12T17:21:54.152213158Z" level=info msg="Connect containerd service" Dec 12 17:21:54.153654 containerd[1581]: time="2025-12-12T17:21:54.152234732Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 17:21:54.153654 containerd[1581]: 
time="2025-12-12T17:21:54.152939166Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:21:54.227664 containerd[1581]: time="2025-12-12T17:21:54.227590919Z" level=info msg="Start subscribing containerd event" Dec 12 17:21:54.227772 containerd[1581]: time="2025-12-12T17:21:54.227676066Z" level=info msg="Start recovering state" Dec 12 17:21:54.227808 containerd[1581]: time="2025-12-12T17:21:54.227771061Z" level=info msg="Start event monitor" Dec 12 17:21:54.227808 containerd[1581]: time="2025-12-12T17:21:54.227791983Z" level=info msg="Start cni network conf syncer for default" Dec 12 17:21:54.227808 containerd[1581]: time="2025-12-12T17:21:54.227799647Z" level=info msg="Start streaming server" Dec 12 17:21:54.227808 containerd[1581]: time="2025-12-12T17:21:54.227808001Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 17:21:54.227885 containerd[1581]: time="2025-12-12T17:21:54.227815243Z" level=info msg="runtime interface starting up..." Dec 12 17:21:54.227885 containerd[1581]: time="2025-12-12T17:21:54.227821030Z" level=info msg="starting plugins..." Dec 12 17:21:54.227885 containerd[1581]: time="2025-12-12T17:21:54.227834212Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 17:21:54.228016 containerd[1581]: time="2025-12-12T17:21:54.227984885Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 17:21:54.228070 containerd[1581]: time="2025-12-12T17:21:54.228056083Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 17:21:54.228129 containerd[1581]: time="2025-12-12T17:21:54.228112452Z" level=info msg="containerd successfully booted in 0.103641s" Dec 12 17:21:54.228305 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 17:21:54.309630 tar[1578]: linux-arm64/README.md Dec 12 17:21:54.336186 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 17:21:54.959283 sshd_keygen[1572]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 17:21:54.978565 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 17:21:54.981829 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 17:21:55.007815 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 17:21:55.008146 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 17:21:55.011401 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 17:21:55.039145 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 17:21:55.041703 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 17:21:55.043668 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 12 17:21:55.044795 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 17:21:55.102184 systemd-networkd[1502]: eth0: Gained IPv6LL Dec 12 17:21:55.104743 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 17:21:55.106323 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 17:21:55.108640 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 12 17:21:55.110982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
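The only error in the containerd start-up above is the CNI one: the CRI plugin found no network configuration under /etc/cni/net.d, which is expected at this stage since pod networking is normally installed later by a CNI add-on. Roughly, the condition it reports looks like this (sketch, not containerd's actual loader):

    import glob
    confs = sorted(glob.glob("/etc/cni/net.d/*.conf") + glob.glob("/etc/cni/net.d/*.conflist"))
    print("CNI network config present:", bool(confs), confs)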
Dec 12 17:21:55.113146 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 17:21:55.139684 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 17:21:55.146908 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 12 17:21:55.147235 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 12 17:21:55.148945 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 17:21:55.669471 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:21:55.670855 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 17:21:55.674557 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:21:55.677189 systemd[1]: Startup finished in 1.463s (kernel) + 4.940s (initrd) + 3.738s (userspace) = 10.142s. Dec 12 17:21:56.011576 kubelet[1689]: E1212 17:21:56.011451 1689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:21:56.013620 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:21:56.013759 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:21:56.014263 systemd[1]: kubelet.service: Consumed 747ms CPU time, 257M memory peak. Dec 12 17:21:58.091519 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 17:21:58.093561 systemd[1]: Started sshd@0-10.0.0.37:22-10.0.0.1:47252.service - OpenSSH per-connection server daemon (10.0.0.1:47252). Dec 12 17:21:58.174206 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 47252 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:21:58.176109 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:21:58.182675 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 17:21:58.183732 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 17:21:58.188111 systemd-logind[1562]: New session 1 of user core. Dec 12 17:21:58.209354 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 17:21:58.212053 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 17:21:58.227459 (systemd)[1707]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 17:21:58.230083 systemd-logind[1562]: New session c1 of user core. Dec 12 17:21:58.344320 systemd[1707]: Queued start job for default target default.target. Dec 12 17:21:58.368055 systemd[1707]: Created slice app.slice - User Application Slice. Dec 12 17:21:58.368089 systemd[1707]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 12 17:21:58.368102 systemd[1707]: Reached target paths.target - Paths. Dec 12 17:21:58.368154 systemd[1707]: Reached target timers.target - Timers. Dec 12 17:21:58.369317 systemd[1707]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 17:21:58.370025 systemd[1707]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... 
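kubelet exits immediately above because /var/lib/kubelet/config.yaml has not been written yet (that file is normally generated when the node is configured or joined to a cluster), and the boot otherwise completes. The startup summary adds up once rounding is taken into account; the three stages are reported to the millisecond, so re-adding them lands within a millisecond of the logged 10.142s:

    kernel, initrd, userspace = 1.463, 4.940, 3.738   # values from the "Startup finished" line
    print(round(kernel + initrd + userspace, 3))      # 10.141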
Dec 12 17:21:58.379207 systemd[1707]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 17:21:58.379281 systemd[1707]: Reached target sockets.target - Sockets. Dec 12 17:21:58.381433 systemd[1707]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 12 17:21:58.381567 systemd[1707]: Reached target basic.target - Basic System. Dec 12 17:21:58.381622 systemd[1707]: Reached target default.target - Main User Target. Dec 12 17:21:58.381652 systemd[1707]: Startup finished in 144ms. Dec 12 17:21:58.381786 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 17:21:58.394266 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 17:21:58.404359 systemd[1]: Started sshd@1-10.0.0.37:22-10.0.0.1:47256.service - OpenSSH per-connection server daemon (10.0.0.1:47256). Dec 12 17:21:58.468364 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 47256 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:21:58.469652 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:21:58.474120 systemd-logind[1562]: New session 2 of user core. Dec 12 17:21:58.484231 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 17:21:58.495162 sshd[1723]: Connection closed by 10.0.0.1 port 47256 Dec 12 17:21:58.495586 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Dec 12 17:21:58.507877 systemd[1]: sshd@1-10.0.0.37:22-10.0.0.1:47256.service: Deactivated successfully. Dec 12 17:21:58.509649 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 17:21:58.512440 systemd-logind[1562]: Session 2 logged out. Waiting for processes to exit. Dec 12 17:21:58.514997 systemd[1]: Started sshd@2-10.0.0.37:22-10.0.0.1:47268.service - OpenSSH per-connection server daemon (10.0.0.1:47268). Dec 12 17:21:58.515533 systemd-logind[1562]: Removed session 2. Dec 12 17:21:58.578854 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 47268 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:21:58.580152 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:21:58.584378 systemd-logind[1562]: New session 3 of user core. Dec 12 17:21:58.599264 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 17:21:58.606164 sshd[1732]: Connection closed by 10.0.0.1 port 47268 Dec 12 17:21:58.606575 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Dec 12 17:21:58.622944 systemd[1]: sshd@2-10.0.0.37:22-10.0.0.1:47268.service: Deactivated successfully. Dec 12 17:21:58.624613 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 17:21:58.625477 systemd-logind[1562]: Session 3 logged out. Waiting for processes to exit. Dec 12 17:21:58.629112 systemd[1]: Started sshd@3-10.0.0.37:22-10.0.0.1:47282.service - OpenSSH per-connection server daemon (10.0.0.1:47282). Dec 12 17:21:58.629694 systemd-logind[1562]: Removed session 3. Dec 12 17:21:58.693875 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 47282 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:21:58.695349 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:21:58.700029 systemd-logind[1562]: New session 4 of user core. Dec 12 17:21:58.713236 systemd[1]: Started session-4.scope - Session 4 of User core. 
Dec 12 17:21:58.725045 sshd[1741]: Connection closed by 10.0.0.1 port 47282 Dec 12 17:21:58.725526 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Dec 12 17:21:58.737264 systemd[1]: sshd@3-10.0.0.37:22-10.0.0.1:47282.service: Deactivated successfully. Dec 12 17:21:58.738979 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 17:21:58.739698 systemd-logind[1562]: Session 4 logged out. Waiting for processes to exit. Dec 12 17:21:58.742241 systemd[1]: Started sshd@4-10.0.0.37:22-10.0.0.1:47292.service - OpenSSH per-connection server daemon (10.0.0.1:47292). Dec 12 17:21:58.742782 systemd-logind[1562]: Removed session 4. Dec 12 17:21:58.805115 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 47292 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:21:58.806317 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:21:58.811092 systemd-logind[1562]: New session 5 of user core. Dec 12 17:21:58.817229 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 17:21:58.833396 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 17:21:58.833674 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:21:58.843807 sudo[1751]: pam_unix(sudo:session): session closed for user root Dec 12 17:21:58.845413 sshd[1750]: Connection closed by 10.0.0.1 port 47292 Dec 12 17:21:58.845907 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Dec 12 17:21:58.857906 systemd[1]: sshd@4-10.0.0.37:22-10.0.0.1:47292.service: Deactivated successfully. Dec 12 17:21:58.860685 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 17:21:58.861505 systemd-logind[1562]: Session 5 logged out. Waiting for processes to exit. Dec 12 17:21:58.864099 systemd[1]: Started sshd@5-10.0.0.37:22-10.0.0.1:47308.service - OpenSSH per-connection server daemon (10.0.0.1:47308). Dec 12 17:21:58.864621 systemd-logind[1562]: Removed session 5. Dec 12 17:21:58.927522 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 47308 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:21:58.928832 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:21:58.933118 systemd-logind[1562]: New session 6 of user core. Dec 12 17:21:58.941228 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 17:21:58.952340 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 17:21:58.952594 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:21:58.958251 sudo[1762]: pam_unix(sudo:session): session closed for user root Dec 12 17:21:58.964708 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 17:21:58.964977 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:21:58.973904 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
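The sudo entries above share a "user : FIELD=value ; ..." layout, which makes them easy to pull apart when reviewing what was run; an illustration using one of the logged commands:

    entry = "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1"
    user, rest = entry.split(" : ", 1)
    fields = dict(part.split("=", 1) for part in rest.split(" ; "))
    print(user, fields["USER"], fields["COMMAND"])   # core root /usr/sbin/setenforce 1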
Dec 12 17:21:59.014000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 17:21:59.015747 augenrules[1784]: No rules Dec 12 17:21:59.016152 kernel: kauditd_printk_skb: 173 callbacks suppressed Dec 12 17:21:59.016183 kernel: audit: type=1305 audit(1765560119.014:216): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 17:21:59.017079 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:21:59.017367 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:21:59.014000 audit[1784]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdc637770 a2=420 a3=0 items=0 ppid=1765 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.020677 kernel: audit: type=1300 audit(1765560119.014:216): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdc637770 a2=420 a3=0 items=0 ppid=1765 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.014000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:21:59.021368 sudo[1761]: pam_unix(sudo:session): session closed for user root Dec 12 17:21:59.022192 kernel: audit: type=1327 audit(1765560119.014:216): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:21:59.022223 kernel: audit: type=1130 audit(1765560119.016:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:59.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:59.023383 sshd[1760]: Connection closed by 10.0.0.1 port 47308 Dec 12 17:21:59.023298 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Dec 12 17:21:59.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:59.026678 kernel: audit: type=1131 audit(1765560119.016:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:59.026714 kernel: audit: type=1106 audit(1765560119.021:219): pid=1761 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:21:59.021000 audit[1761]: USER_END pid=1761 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 17:21:59.028978 kernel: audit: type=1104 audit(1765560119.021:220): pid=1761 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:21:59.021000 audit[1761]: CRED_DISP pid=1761 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:21:59.023000 audit[1757]: USER_END pid=1757 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:21:59.034370 kernel: audit: type=1106 audit(1765560119.023:221): pid=1757 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:21:59.034401 kernel: audit: type=1104 audit(1765560119.023:222): pid=1757 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:21:59.023000 audit[1757]: CRED_DISP pid=1757 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:21:59.045294 systemd[1]: sshd@5-10.0.0.37:22-10.0.0.1:47308.service: Deactivated successfully. Dec 12 17:21:59.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.37:22-10.0.0.1:47308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:59.047135 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 17:21:59.047911 systemd-logind[1562]: Session 6 logged out. Waiting for processes to exit. Dec 12 17:21:59.048049 kernel: audit: type=1131 audit(1765560119.044:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.37:22-10.0.0.1:47308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:59.050553 systemd[1]: Started sshd@6-10.0.0.37:22-10.0.0.1:47320.service - OpenSSH per-connection server daemon (10.0.0.1:47320). Dec 12 17:21:59.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.37:22-10.0.0.1:47320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:21:59.051266 systemd-logind[1562]: Removed session 6. 
Dec 12 17:21:59.114000 audit[1793]: USER_ACCT pid=1793 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:21:59.115727 sshd[1793]: Accepted publickey for core from 10.0.0.1 port 47320 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:21:59.115000 audit[1793]: CRED_ACQ pid=1793 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:21:59.115000 audit[1793]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff5448180 a2=3 a3=0 items=0 ppid=1 pid=1793 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.115000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:21:59.117148 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:21:59.121414 systemd-logind[1562]: New session 7 of user core. Dec 12 17:21:59.132248 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 17:21:59.133000 audit[1793]: USER_START pid=1793 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:21:59.134000 audit[1796]: CRED_ACQ pid=1796 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:21:59.141000 audit[1797]: USER_ACCT pid=1797 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:21:59.142914 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 17:21:59.142000 audit[1797]: CRED_REFR pid=1797 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:21:59.143588 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:21:59.144000 audit[1797]: USER_START pid=1797 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:21:59.413180 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 12 17:21:59.427355 (dockerd)[1817]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 17:21:59.620491 dockerd[1817]: time="2025-12-12T17:21:59.620416067Z" level=info msg="Starting up" Dec 12 17:21:59.621805 dockerd[1817]: time="2025-12-12T17:21:59.621761444Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 17:21:59.633511 dockerd[1817]: time="2025-12-12T17:21:59.633434107Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 17:21:59.699478 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2277552628-merged.mount: Deactivated successfully. Dec 12 17:21:59.817559 dockerd[1817]: time="2025-12-12T17:21:59.817514152Z" level=info msg="Loading containers: start." Dec 12 17:21:59.827057 kernel: Initializing XFRM netlink socket Dec 12 17:21:59.875000 audit[1872]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1872 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:59.875000 audit[1872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd03b7060 a2=0 a3=0 items=0 ppid=1817 pid=1872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.875000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 17:21:59.877000 audit[1874]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1874 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:59.877000 audit[1874]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffce4c2a70 a2=0 a3=0 items=0 ppid=1817 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.877000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 17:21:59.879000 audit[1876]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:59.879000 audit[1876]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffd6dacb0 a2=0 a3=0 items=0 ppid=1817 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.879000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 17:21:59.882000 audit[1878]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1878 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:59.882000 audit[1878]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdb75e8d0 a2=0 a3=0 items=0 ppid=1817 pid=1878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.882000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 17:21:59.884000 audit[1880]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1880 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:59.884000 audit[1880]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffde0bb2c0 a2=0 a3=0 items=0 ppid=1817 pid=1880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.884000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 17:21:59.886000 audit[1882]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1882 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:59.886000 audit[1882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffff8fa450 a2=0 a3=0 items=0 ppid=1817 pid=1882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.886000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:21:59.888000 audit[1884]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1884 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:59.888000 audit[1884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffecd4d1a0 a2=0 a3=0 items=0 ppid=1817 pid=1884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.888000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:21:59.891000 audit[1886]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1886 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:59.891000 audit[1886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffff14df750 a2=0 a3=0 items=0 ppid=1817 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.891000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 17:21:59.912000 audit[1889]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1889 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:59.912000 audit[1889]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffc955f300 a2=0 a3=0 items=0 ppid=1817 pid=1889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.912000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 12 17:21:59.914000 audit[1891]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1891 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:59.914000 audit[1891]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffda1f9e30 a2=0 a3=0 items=0 ppid=1817 pid=1891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.914000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 17:21:59.916000 audit[1893]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1893 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:59.916000 audit[1893]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffc8021180 a2=0 a3=0 items=0 ppid=1817 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.916000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 17:21:59.918000 audit[1895]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1895 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:59.918000 audit[1895]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=fffff4cc9300 a2=0 a3=0 items=0 ppid=1817 pid=1895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.918000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:21:59.921000 audit[1897]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1897 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:59.921000 audit[1897]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffffd6a6bb0 a2=0 a3=0 items=0 ppid=1817 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.921000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 17:21:59.956000 audit[1927]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1927 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:59.956000 audit[1927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe0d1d670 a2=0 a3=0 items=0 ppid=1817 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.956000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 17:21:59.958000 audit[1929]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1929 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:59.958000 audit[1929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc1494750 a2=0 a3=0 items=0 ppid=1817 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.958000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 17:21:59.960000 audit[1931]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1931 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:59.960000 audit[1931]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffec2cca80 a2=0 a3=0 items=0 ppid=1817 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.960000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 17:21:59.962000 audit[1933]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:59.962000 audit[1933]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc56e4e80 a2=0 a3=0 items=0 ppid=1817 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.962000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 17:21:59.964000 audit[1935]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:59.964000 audit[1935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffb235fa0 a2=0 a3=0 items=0 ppid=1817 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.964000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 17:21:59.966000 audit[1937]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1937 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:59.966000 audit[1937]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc1c889e0 a2=0 a3=0 items=0 ppid=1817 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.966000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:21:59.968000 audit[1939]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1939 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:59.968000 audit[1939]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff264ed60 a2=0 a3=0 items=0 ppid=1817 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.968000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:21:59.970000 audit[1941]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:59.970000 audit[1941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffff99dc8a0 a2=0 a3=0 items=0 ppid=1817 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.970000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 17:21:59.974000 audit[1943]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1943 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:59.974000 audit[1943]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffe0dc9840 a2=0 a3=0 items=0 ppid=1817 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.974000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 12 17:21:59.975000 audit[1945]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:59.975000 audit[1945]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc71f4030 a2=0 a3=0 items=0 ppid=1817 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.975000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 17:21:59.977000 audit[1947]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1947 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:59.977000 audit[1947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=fffff2814a20 a2=0 a3=0 items=0 ppid=1817 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.977000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 17:21:59.979000 audit[1949]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1949 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:59.979000 audit[1949]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=fffff084da40 a2=0 a3=0 items=0 ppid=1817 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.979000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:21:59.981000 audit[1951]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1951 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:59.981000 audit[1951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffc0fa3d40 a2=0 a3=0 items=0 ppid=1817 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.981000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 17:21:59.986000 audit[1956]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1956 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:59.986000 audit[1956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc1ac2630 a2=0 a3=0 items=0 ppid=1817 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.986000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 17:21:59.989000 audit[1958]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:59.989000 audit[1958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffe8c99fe0 a2=0 a3=0 items=0 ppid=1817 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.989000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 17:21:59.991000 audit[1960]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1960 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:21:59.991000 audit[1960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd4a4f240 a2=0 a3=0 items=0 ppid=1817 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.991000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 17:21:59.993000 audit[1962]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:59.993000 audit[1962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc96e56c0 a2=0 a3=0 items=0 ppid=1817 pid=1962 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.993000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 17:21:59.995000 audit[1964]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:59.995000 audit[1964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff30c7850 a2=0 a3=0 items=0 ppid=1817 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.995000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 17:21:59.997000 audit[1966]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:21:59.997000 audit[1966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd1d00dd0 a2=0 a3=0 items=0 ppid=1817 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:21:59.997000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 17:22:00.013000 audit[1970]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:00.013000 audit[1970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffe7fd4df0 a2=0 a3=0 items=0 ppid=1817 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:00.013000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 12 17:22:00.015000 audit[1972]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=1972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:00.015000 audit[1972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffff95d97e0 a2=0 a3=0 items=0 ppid=1817 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:00.015000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 12 17:22:00.024000 audit[1980]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=1980 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:00.024000 audit[1980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=fffff8bc9e30 a2=0 a3=0 items=0 ppid=1817 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:00.024000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 12 17:22:00.033000 audit[1986]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=1986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:00.033000 audit[1986]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd516e0d0 a2=0 a3=0 items=0 ppid=1817 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:00.033000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 12 17:22:00.036000 audit[1988]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=1988 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:00.036000 audit[1988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffd5e8aec0 a2=0 a3=0 items=0 ppid=1817 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:00.036000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 12 17:22:00.038000 audit[1990]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=1990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:00.038000 audit[1990]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe3c70800 a2=0 a3=0 items=0 ppid=1817 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:00.038000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 12 17:22:00.040000 audit[1992]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=1992 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:00.040000 audit[1992]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffef1441d0 a2=0 a3=0 items=0 ppid=1817 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:00.040000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:22:00.042000 audit[1994]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=1994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:00.042000 audit[1994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffcd5657c0 a2=0 a3=0 items=0 ppid=1817 pid=1994 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:00.042000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 12 17:22:00.043811 systemd-networkd[1502]: docker0: Link UP Dec 12 17:22:00.047956 dockerd[1817]: time="2025-12-12T17:22:00.047895052Z" level=info msg="Loading containers: done." Dec 12 17:22:00.067028 dockerd[1817]: time="2025-12-12T17:22:00.066956927Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 17:22:00.067198 dockerd[1817]: time="2025-12-12T17:22:00.067079571Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 17:22:00.067263 dockerd[1817]: time="2025-12-12T17:22:00.067244168Z" level=info msg="Initializing buildkit" Dec 12 17:22:00.090941 dockerd[1817]: time="2025-12-12T17:22:00.090869524Z" level=info msg="Completed buildkit initialization" Dec 12 17:22:00.097128 dockerd[1817]: time="2025-12-12T17:22:00.097063202Z" level=info msg="Daemon has completed initialization" Dec 12 17:22:00.097450 dockerd[1817]: time="2025-12-12T17:22:00.097308450Z" level=info msg="API listen on /run/docker.sock" Dec 12 17:22:00.097536 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 17:22:00.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:22:00.597091 containerd[1581]: time="2025-12-12T17:22:00.597046991Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 12 17:22:00.697475 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2782597139-merged.mount: Deactivated successfully. Dec 12 17:22:01.231275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount414281115.mount: Deactivated successfully. 
Dec 12 17:22:01.968959 containerd[1581]: time="2025-12-12T17:22:01.968872460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:01.970061 containerd[1581]: time="2025-12-12T17:22:01.969956047Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=24835766" Dec 12 17:22:01.971312 containerd[1581]: time="2025-12-12T17:22:01.971286926Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:01.974301 containerd[1581]: time="2025-12-12T17:22:01.974259629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:01.976023 containerd[1581]: time="2025-12-12T17:22:01.975974390Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 1.378882085s" Dec 12 17:22:01.976085 containerd[1581]: time="2025-12-12T17:22:01.976023605Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Dec 12 17:22:01.976764 containerd[1581]: time="2025-12-12T17:22:01.976616265Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 12 17:22:03.168395 containerd[1581]: time="2025-12-12T17:22:03.168326617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:03.170701 containerd[1581]: time="2025-12-12T17:22:03.170651323Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22610801" Dec 12 17:22:03.172507 containerd[1581]: time="2025-12-12T17:22:03.172475787Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:03.176510 containerd[1581]: time="2025-12-12T17:22:03.176428392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:03.177762 containerd[1581]: time="2025-12-12T17:22:03.177706761Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.201036938s" Dec 12 17:22:03.177762 containerd[1581]: time="2025-12-12T17:22:03.177759289Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Dec 12 
17:22:03.178264 containerd[1581]: time="2025-12-12T17:22:03.178243694Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 12 17:22:04.371556 containerd[1581]: time="2025-12-12T17:22:04.371497685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:04.374485 containerd[1581]: time="2025-12-12T17:22:04.374229521Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17610300" Dec 12 17:22:04.375705 containerd[1581]: time="2025-12-12T17:22:04.375675998Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:04.379483 containerd[1581]: time="2025-12-12T17:22:04.379442828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:04.381232 containerd[1581]: time="2025-12-12T17:22:04.381013043Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.202739082s" Dec 12 17:22:04.381232 containerd[1581]: time="2025-12-12T17:22:04.381070758Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Dec 12 17:22:04.381512 containerd[1581]: time="2025-12-12T17:22:04.381482794Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 12 17:22:05.412858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1190967733.mount: Deactivated successfully. 
Dec 12 17:22:05.743064 containerd[1581]: time="2025-12-12T17:22:05.742786333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:05.743990 containerd[1581]: time="2025-12-12T17:22:05.743861634Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27558078" Dec 12 17:22:05.745086 containerd[1581]: time="2025-12-12T17:22:05.745011844Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:05.747598 containerd[1581]: time="2025-12-12T17:22:05.747541150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:05.748306 containerd[1581]: time="2025-12-12T17:22:05.748124336Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.36654655s" Dec 12 17:22:05.748306 containerd[1581]: time="2025-12-12T17:22:05.748157096Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Dec 12 17:22:05.748615 containerd[1581]: time="2025-12-12T17:22:05.748589435Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 12 17:22:06.259369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 17:22:06.260877 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:22:06.266690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4087173666.mount: Deactivated successfully. Dec 12 17:22:06.432631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:22:06.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:22:06.433698 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 12 17:22:06.433776 kernel: audit: type=1130 audit(1765560126.431:274): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:22:06.452345 (kubelet)[2128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:22:06.495589 kubelet[2128]: E1212 17:22:06.495528 2128 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:22:06.498528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:22:06.498650 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 12 17:22:06.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:22:06.499254 systemd[1]: kubelet.service: Consumed 165ms CPU time, 106.9M memory peak. Dec 12 17:22:06.502074 kernel: audit: type=1131 audit(1765560126.498:275): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:22:07.083768 containerd[1581]: time="2025-12-12T17:22:07.083709167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:07.090609 containerd[1581]: time="2025-12-12T17:22:07.090525213Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=15956379" Dec 12 17:22:07.112088 containerd[1581]: time="2025-12-12T17:22:07.111974206Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:07.118373 containerd[1581]: time="2025-12-12T17:22:07.118308286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:07.120114 containerd[1581]: time="2025-12-12T17:22:07.119928840Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.371305792s" Dec 12 17:22:07.120114 containerd[1581]: time="2025-12-12T17:22:07.119974261Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Dec 12 17:22:07.120603 containerd[1581]: time="2025-12-12T17:22:07.120577403Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 17:22:07.992193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount409005517.mount: Deactivated successfully. 
Dec 12 17:22:07.998873 containerd[1581]: time="2025-12-12T17:22:07.998817791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:22:07.999654 containerd[1581]: time="2025-12-12T17:22:07.999578716Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=2405" Dec 12 17:22:08.000546 containerd[1581]: time="2025-12-12T17:22:08.000488054Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:22:08.002996 containerd[1581]: time="2025-12-12T17:22:08.002681522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:22:08.003844 containerd[1581]: time="2025-12-12T17:22:08.003814654Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 883.203257ms" Dec 12 17:22:08.003906 containerd[1581]: time="2025-12-12T17:22:08.003842075Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 12 17:22:08.004298 containerd[1581]: time="2025-12-12T17:22:08.004275887Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 12 17:22:08.564908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1956232677.mount: Deactivated successfully. 
Dec 12 17:22:10.390067 containerd[1581]: time="2025-12-12T17:22:10.389604049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:10.392108 containerd[1581]: time="2025-12-12T17:22:10.392046356Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=56456774" Dec 12 17:22:10.395018 containerd[1581]: time="2025-12-12T17:22:10.394962104Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:10.399062 containerd[1581]: time="2025-12-12T17:22:10.399003243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:10.401279 containerd[1581]: time="2025-12-12T17:22:10.401249766Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.396885363s" Dec 12 17:22:10.401391 containerd[1581]: time="2025-12-12T17:22:10.401370246Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Dec 12 17:22:13.871762 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:22:13.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:22:13.871934 systemd[1]: kubelet.service: Consumed 165ms CPU time, 106.9M memory peak. Dec 12 17:22:13.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:22:13.874158 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:22:13.876309 kernel: audit: type=1130 audit(1765560133.870:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:22:13.876382 kernel: audit: type=1131 audit(1765560133.870:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:22:13.899980 systemd[1]: Reload requested from client PID 2269 ('systemctl') (unit session-7.scope)... Dec 12 17:22:13.900000 systemd[1]: Reloading... Dec 12 17:22:13.977148 zram_generator::config[2320]: No configuration found. Dec 12 17:22:14.154564 systemd[1]: Reloading finished in 254 ms. 
Dec 12 17:22:14.181000 audit: BPF prog-id=61 op=LOAD Dec 12 17:22:14.181000 audit: BPF prog-id=57 op=UNLOAD Dec 12 17:22:14.184076 kernel: audit: type=1334 audit(1765560134.181:278): prog-id=61 op=LOAD Dec 12 17:22:14.184163 kernel: audit: type=1334 audit(1765560134.181:279): prog-id=57 op=UNLOAD Dec 12 17:22:14.184185 kernel: audit: type=1334 audit(1765560134.181:280): prog-id=62 op=LOAD Dec 12 17:22:14.184203 kernel: audit: type=1334 audit(1765560134.181:281): prog-id=56 op=UNLOAD Dec 12 17:22:14.181000 audit: BPF prog-id=62 op=LOAD Dec 12 17:22:14.181000 audit: BPF prog-id=56 op=UNLOAD Dec 12 17:22:14.182000 audit: BPF prog-id=63 op=LOAD Dec 12 17:22:14.182000 audit: BPF prog-id=44 op=UNLOAD Dec 12 17:22:14.183000 audit: BPF prog-id=64 op=LOAD Dec 12 17:22:14.183000 audit: BPF prog-id=65 op=LOAD Dec 12 17:22:14.185320 kernel: audit: type=1334 audit(1765560134.182:282): prog-id=63 op=LOAD Dec 12 17:22:14.185354 kernel: audit: type=1334 audit(1765560134.182:283): prog-id=44 op=UNLOAD Dec 12 17:22:14.185370 kernel: audit: type=1334 audit(1765560134.183:284): prog-id=64 op=LOAD Dec 12 17:22:14.185385 kernel: audit: type=1334 audit(1765560134.183:285): prog-id=65 op=LOAD Dec 12 17:22:14.183000 audit: BPF prog-id=45 op=UNLOAD Dec 12 17:22:14.183000 audit: BPF prog-id=46 op=UNLOAD Dec 12 17:22:14.184000 audit: BPF prog-id=66 op=LOAD Dec 12 17:22:14.184000 audit: BPF prog-id=51 op=UNLOAD Dec 12 17:22:14.185000 audit: BPF prog-id=67 op=LOAD Dec 12 17:22:14.191000 audit: BPF prog-id=68 op=LOAD Dec 12 17:22:14.191000 audit: BPF prog-id=52 op=UNLOAD Dec 12 17:22:14.191000 audit: BPF prog-id=53 op=UNLOAD Dec 12 17:22:14.193000 audit: BPF prog-id=69 op=LOAD Dec 12 17:22:14.193000 audit: BPF prog-id=47 op=UNLOAD Dec 12 17:22:14.193000 audit: BPF prog-id=70 op=LOAD Dec 12 17:22:14.193000 audit: BPF prog-id=71 op=LOAD Dec 12 17:22:14.193000 audit: BPF prog-id=48 op=UNLOAD Dec 12 17:22:14.193000 audit: BPF prog-id=49 op=UNLOAD Dec 12 17:22:14.194000 audit: BPF prog-id=72 op=LOAD Dec 12 17:22:14.194000 audit: BPF prog-id=58 op=UNLOAD Dec 12 17:22:14.194000 audit: BPF prog-id=73 op=LOAD Dec 12 17:22:14.194000 audit: BPF prog-id=74 op=LOAD Dec 12 17:22:14.194000 audit: BPF prog-id=59 op=UNLOAD Dec 12 17:22:14.194000 audit: BPF prog-id=60 op=UNLOAD Dec 12 17:22:14.195000 audit: BPF prog-id=75 op=LOAD Dec 12 17:22:14.195000 audit: BPF prog-id=41 op=UNLOAD Dec 12 17:22:14.195000 audit: BPF prog-id=76 op=LOAD Dec 12 17:22:14.195000 audit: BPF prog-id=77 op=LOAD Dec 12 17:22:14.195000 audit: BPF prog-id=42 op=UNLOAD Dec 12 17:22:14.195000 audit: BPF prog-id=43 op=UNLOAD Dec 12 17:22:14.195000 audit: BPF prog-id=78 op=LOAD Dec 12 17:22:14.195000 audit: BPF prog-id=79 op=LOAD Dec 12 17:22:14.195000 audit: BPF prog-id=54 op=UNLOAD Dec 12 17:22:14.195000 audit: BPF prog-id=55 op=UNLOAD Dec 12 17:22:14.196000 audit: BPF prog-id=80 op=LOAD Dec 12 17:22:14.196000 audit: BPF prog-id=50 op=UNLOAD Dec 12 17:22:14.221662 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 17:22:14.221751 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 17:22:14.222233 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:22:14.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:22:14.222296 systemd[1]: kubelet.service: Consumed 97ms CPU time, 95.3M memory peak. 
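The burst of "audit: BPF prog-id=N op=LOAD/UNLOAD" records during the reload above typically corresponds to systemd detaching and re-attaching the per-unit BPF programs it manages; pairing the operations shows which program IDs were replaced. A small tally sketch over journal text like the above (parsing only, nothing systemd-specific is assumed):

```python
import re

BPF_OP = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def bpf_swap(journal_text: str):
    """Return (loaded_ids, unloaded_ids) seen in a chunk of journal text."""
    loaded, unloaded = set(), set()
    for prog_id, op in BPF_OP.findall(journal_text):
        (loaded if op == "LOAD" else unloaded).add(int(prog_id))
    return loaded, unloaded

sample = ("audit: BPF prog-id=61 op=LOAD ... audit: BPF prog-id=57 op=UNLOAD "
          "... audit: BPF prog-id=62 op=LOAD ... audit: BPF prog-id=56 op=UNLOAD")
print(bpf_swap(sample))  # e.g. ({61, 62}, {56, 57})
```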
Dec 12 17:22:14.223908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:22:14.354900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:22:14.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:22:14.373375 (kubelet)[2359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:22:14.410723 kubelet[2359]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:22:14.410723 kubelet[2359]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:22:14.410723 kubelet[2359]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:22:14.410723 kubelet[2359]: I1212 17:22:14.410708 2359 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:22:15.825091 kubelet[2359]: I1212 17:22:15.825045 2359 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 17:22:15.825509 kubelet[2359]: I1212 17:22:15.825490 2359 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:22:15.825923 kubelet[2359]: I1212 17:22:15.825902 2359 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 17:22:15.848163 kubelet[2359]: I1212 17:22:15.848120 2359 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:22:15.850677 kubelet[2359]: E1212 17:22:15.850630 2359 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:22:15.855674 kubelet[2359]: I1212 17:22:15.855643 2359 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:22:15.859371 kubelet[2359]: I1212 17:22:15.859340 2359 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 17:22:15.860349 kubelet[2359]: I1212 17:22:15.860286 2359 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:22:15.860642 kubelet[2359]: I1212 17:22:15.860440 2359 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:22:15.860912 kubelet[2359]: I1212 17:22:15.860896 2359 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:22:15.860965 kubelet[2359]: I1212 17:22:15.860957 2359 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 17:22:15.861230 kubelet[2359]: I1212 17:22:15.861216 2359 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:22:15.864808 kubelet[2359]: I1212 17:22:15.864782 2359 kubelet.go:446] "Attempting to sync node with API server" Dec 12 17:22:15.865092 kubelet[2359]: I1212 17:22:15.865076 2359 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:22:15.865195 kubelet[2359]: I1212 17:22:15.865185 2359 kubelet.go:352] "Adding apiserver pod source" Dec 12 17:22:15.865254 kubelet[2359]: I1212 17:22:15.865245 2359 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:22:15.867594 kubelet[2359]: W1212 17:22:15.867525 2359 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused Dec 12 17:22:15.867701 kubelet[2359]: E1212 17:22:15.867600 2359 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:22:15.868772 kubelet[2359]: W1212 17:22:15.868676 2359 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused Dec 12 17:22:15.868772 kubelet[2359]: E1212 17:22:15.868743 2359 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:22:15.870134 kubelet[2359]: I1212 17:22:15.870091 2359 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 17:22:15.870746 kubelet[2359]: I1212 17:22:15.870730 2359 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 17:22:15.870874 kubelet[2359]: W1212 17:22:15.870860 2359 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 17:22:15.871981 kubelet[2359]: I1212 17:22:15.871958 2359 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:22:15.872070 kubelet[2359]: I1212 17:22:15.872004 2359 server.go:1287] "Started kubelet" Dec 12 17:22:15.872486 kubelet[2359]: I1212 17:22:15.872452 2359 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:22:15.873481 kubelet[2359]: I1212 17:22:15.873461 2359 server.go:479] "Adding debug handlers to kubelet server" Dec 12 17:22:15.873638 kubelet[2359]: I1212 17:22:15.873572 2359 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:22:15.874024 kubelet[2359]: I1212 17:22:15.873998 2359 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:22:15.875870 kubelet[2359]: E1212 17:22:15.875495 2359 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.37:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.37:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18808796d83d5b33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 17:22:15.871978291 +0000 UTC m=+1.495387176,LastTimestamp:2025-12-12 17:22:15.871978291 +0000 UTC m=+1.495387176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 12 17:22:15.876688 kubelet[2359]: I1212 17:22:15.876664 2359 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:22:15.877796 kubelet[2359]: I1212 17:22:15.877760 2359 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:22:15.878617 kubelet[2359]: E1212 17:22:15.878223 2359 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:22:15.878617 kubelet[2359]: I1212 17:22:15.878270 2359 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:22:15.878617 kubelet[2359]: 
I1212 17:22:15.878476 2359 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:22:15.878617 kubelet[2359]: I1212 17:22:15.878544 2359 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:22:15.879046 kubelet[2359]: W1212 17:22:15.878996 2359 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused Dec 12 17:22:15.879092 kubelet[2359]: E1212 17:22:15.879059 2359 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:22:15.878000 audit[2372]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2372 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:15.878000 audit[2372]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc2059aa0 a2=0 a3=0 items=0 ppid=2359 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:15.878000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 17:22:15.880180 kubelet[2359]: I1212 17:22:15.879836 2359 factory.go:221] Registration of the systemd container factory successfully Dec 12 17:22:15.880180 kubelet[2359]: I1212 17:22:15.879956 2359 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:22:15.880000 audit[2373]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2373 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:15.880000 audit[2373]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffffca9580 a2=0 a3=0 items=0 ppid=2359 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:15.880000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 17:22:15.881714 kubelet[2359]: I1212 17:22:15.881484 2359 factory.go:221] Registration of the containerd container factory successfully Dec 12 17:22:15.881714 kubelet[2359]: E1212 17:22:15.881607 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="200ms" Dec 12 17:22:15.881761 kubelet[2359]: E1212 17:22:15.881747 2359 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:22:15.882000 audit[2375]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2375 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:15.882000 audit[2375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffcfb51e00 a2=0 a3=0 items=0 ppid=2359 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:15.882000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:22:15.884000 audit[2377]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2377 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:15.884000 audit[2377]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe24ddec0 a2=0 a3=0 items=0 ppid=2359 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:15.884000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:22:15.895619 kubelet[2359]: I1212 17:22:15.895589 2359 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:22:15.895619 kubelet[2359]: I1212 17:22:15.895610 2359 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:22:15.895761 kubelet[2359]: I1212 17:22:15.895632 2359 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:22:15.894000 audit[2386]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2386 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:15.894000 audit[2386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffcb359eb0 a2=0 a3=0 items=0 ppid=2359 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:15.894000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 12 17:22:15.896208 kubelet[2359]: I1212 17:22:15.896072 2359 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 12 17:22:15.895000 audit[2387]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:15.895000 audit[2387]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffb499bd0 a2=0 a3=0 items=0 ppid=2359 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:15.895000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 17:22:15.896000 audit[2388]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:15.896000 audit[2388]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcb0b8690 a2=0 a3=0 items=0 ppid=2359 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:15.896000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 17:22:15.897427 kubelet[2359]: I1212 17:22:15.897408 2359 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 17:22:15.897520 kubelet[2359]: I1212 17:22:15.897510 2359 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 17:22:15.897741 kubelet[2359]: I1212 17:22:15.897727 2359 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 17:22:15.897798 kubelet[2359]: I1212 17:22:15.897789 2359 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 17:22:15.897916 kubelet[2359]: E1212 17:22:15.897887 2359 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:22:15.897000 audit[2389]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:15.897000 audit[2389]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd2646870 a2=0 a3=0 items=0 ppid=2359 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:15.897000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 17:22:15.897000 audit[2390]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2390 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:15.897000 audit[2390]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc01f720 a2=0 a3=0 items=0 ppid=2359 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:15.897000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 17:22:15.899000 audit[2392]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2392 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:15.899000 audit[2392]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe57e3a00 a2=0 a3=0 items=0 ppid=2359 pid=2392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:15.899000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 17:22:15.899000 audit[2391]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:15.899000 audit[2391]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcfa8d3e0 a2=0 a3=0 items=0 ppid=2359 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:15.899000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 17:22:15.900000 audit[2393]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2393 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:15.900000 audit[2393]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff320cba0 a2=0 a3=0 items=0 ppid=2359 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
17:22:15.900000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 17:22:15.906458 kubelet[2359]: I1212 17:22:15.906382 2359 policy_none.go:49] "None policy: Start" Dec 12 17:22:15.906458 kubelet[2359]: I1212 17:22:15.906417 2359 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:22:15.906458 kubelet[2359]: I1212 17:22:15.906455 2359 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:22:15.906867 kubelet[2359]: W1212 17:22:15.906803 2359 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.37:6443: connect: connection refused Dec 12 17:22:15.906899 kubelet[2359]: E1212 17:22:15.906862 2359 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.37:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:22:15.913179 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 17:22:15.933118 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 17:22:15.937210 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 17:22:15.951191 kubelet[2359]: I1212 17:22:15.951093 2359 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 17:22:15.951367 kubelet[2359]: I1212 17:22:15.951339 2359 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:22:15.951395 kubelet[2359]: I1212 17:22:15.951354 2359 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:22:15.951652 kubelet[2359]: I1212 17:22:15.951640 2359 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:22:15.952379 kubelet[2359]: E1212 17:22:15.952361 2359 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 17:22:15.952491 kubelet[2359]: E1212 17:22:15.952479 2359 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 12 17:22:16.006679 systemd[1]: Created slice kubepods-burstable-pod94dfb95d5373d85ac9345f101a1c2026.slice - libcontainer container kubepods-burstable-pod94dfb95d5373d85ac9345f101a1c2026.slice. Dec 12 17:22:16.029013 kubelet[2359]: E1212 17:22:16.028963 2359 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:22:16.032690 systemd[1]: Created slice kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice - libcontainer container kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice. 
Dec 12 17:22:16.048456 kubelet[2359]: E1212 17:22:16.048383 2359 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:22:16.051113 systemd[1]: Created slice kubepods-burstable-pod0a68423804124305a9de061f38780871.slice - libcontainer container kubepods-burstable-pod0a68423804124305a9de061f38780871.slice. Dec 12 17:22:16.052995 kubelet[2359]: I1212 17:22:16.052669 2359 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:22:16.052995 kubelet[2359]: E1212 17:22:16.052800 2359 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:22:16.053179 kubelet[2359]: E1212 17:22:16.053149 2359 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost" Dec 12 17:22:16.082870 kubelet[2359]: E1212 17:22:16.082748 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="400ms" Dec 12 17:22:16.180338 kubelet[2359]: I1212 17:22:16.180296 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:22:16.180338 kubelet[2359]: I1212 17:22:16.180342 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:22:16.180467 kubelet[2359]: I1212 17:22:16.180361 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:22:16.180467 kubelet[2359]: I1212 17:22:16.180376 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/94dfb95d5373d85ac9345f101a1c2026-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"94dfb95d5373d85ac9345f101a1c2026\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:22:16.180467 kubelet[2359]: I1212 17:22:16.180394 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/94dfb95d5373d85ac9345f101a1c2026-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"94dfb95d5373d85ac9345f101a1c2026\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:22:16.180467 kubelet[2359]: I1212 17:22:16.180410 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:22:16.180467 kubelet[2359]: I1212 17:22:16.180427 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:22:16.180623 kubelet[2359]: I1212 17:22:16.180442 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/94dfb95d5373d85ac9345f101a1c2026-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"94dfb95d5373d85ac9345f101a1c2026\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:22:16.180623 kubelet[2359]: I1212 17:22:16.180455 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:22:16.255061 kubelet[2359]: I1212 17:22:16.255011 2359 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:22:16.255501 kubelet[2359]: E1212 17:22:16.255473 2359 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost" Dec 12 17:22:16.330153 kubelet[2359]: E1212 17:22:16.330102 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:16.330879 containerd[1581]: time="2025-12-12T17:22:16.330847950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:94dfb95d5373d85ac9345f101a1c2026,Namespace:kube-system,Attempt:0,}" Dec 12 17:22:16.349897 kubelet[2359]: E1212 17:22:16.349276 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:16.351076 containerd[1581]: time="2025-12-12T17:22:16.350388601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,}" Dec 12 17:22:16.354359 kubelet[2359]: E1212 17:22:16.354086 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:16.355135 containerd[1581]: time="2025-12-12T17:22:16.355095462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,}" Dec 12 17:22:16.357107 containerd[1581]: time="2025-12-12T17:22:16.357064709Z" level=info msg="connecting to shim 1b22c7e1b6f2fb93597418af04bfc7e925b5360dba0bf56ca7b252562257055f" address="unix:///run/containerd/s/748b1e7c3cd58a9b2757a35b897878744ab07dacdb59d01f50613ddb0d3a08b4" namespace=k8s.io 
protocol=ttrpc version=3 Dec 12 17:22:16.382199 containerd[1581]: time="2025-12-12T17:22:16.382119939Z" level=info msg="connecting to shim 6cf4ce2fb8859f4bdf146716c6defdf87cf83f9a2e1d3864d24ada5b7d86083c" address="unix:///run/containerd/s/d1857e6a338971ffcb0ab7ac19a85eb947175322917150eeb1aebd3825401f8d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:22:16.400323 systemd[1]: Started cri-containerd-1b22c7e1b6f2fb93597418af04bfc7e925b5360dba0bf56ca7b252562257055f.scope - libcontainer container 1b22c7e1b6f2fb93597418af04bfc7e925b5360dba0bf56ca7b252562257055f. Dec 12 17:22:16.408540 systemd[1]: Started cri-containerd-6cf4ce2fb8859f4bdf146716c6defdf87cf83f9a2e1d3864d24ada5b7d86083c.scope - libcontainer container 6cf4ce2fb8859f4bdf146716c6defdf87cf83f9a2e1d3864d24ada5b7d86083c. Dec 12 17:22:16.410092 containerd[1581]: time="2025-12-12T17:22:16.410026768Z" level=info msg="connecting to shim 3afb29fef822e514bba7a024615f5f2aa30d8fd2411646ec5877b8665018cc24" address="unix:///run/containerd/s/c0a91635d8f1a75eb5b5e34c1382346e557d54402366c5a2918325dcd6018676" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:22:16.415000 audit: BPF prog-id=81 op=LOAD Dec 12 17:22:16.416000 audit: BPF prog-id=82 op=LOAD Dec 12 17:22:16.416000 audit[2414]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2403 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.416000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323263376531623666326662393335393734313861663034626663 Dec 12 17:22:16.417000 audit: BPF prog-id=82 op=UNLOAD Dec 12 17:22:16.417000 audit[2414]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2403 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323263376531623666326662393335393734313861663034626663 Dec 12 17:22:16.417000 audit: BPF prog-id=83 op=LOAD Dec 12 17:22:16.417000 audit[2414]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2403 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323263376531623666326662393335393734313861663034626663 Dec 12 17:22:16.420000 audit: BPF prog-id=84 op=LOAD Dec 12 17:22:16.420000 audit[2414]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2403 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323263376531623666326662393335393734313861663034626663 Dec 12 17:22:16.420000 audit: BPF prog-id=84 op=UNLOAD Dec 12 17:22:16.420000 audit[2414]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2403 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323263376531623666326662393335393734313861663034626663 Dec 12 17:22:16.420000 audit: BPF prog-id=83 op=UNLOAD Dec 12 17:22:16.420000 audit[2414]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2403 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323263376531623666326662393335393734313861663034626663 Dec 12 17:22:16.420000 audit: BPF prog-id=85 op=LOAD Dec 12 17:22:16.420000 audit[2414]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2403 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162323263376531623666326662393335393734313861663034626663 Dec 12 17:22:16.426000 audit: BPF prog-id=86 op=LOAD Dec 12 17:22:16.427000 audit: BPF prog-id=87 op=LOAD Dec 12 17:22:16.427000 audit[2445]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2431 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663663463653266623838353966346264663134363731366336646566 Dec 12 17:22:16.427000 audit: BPF prog-id=87 op=UNLOAD Dec 12 17:22:16.427000 audit[2445]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2431 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.427000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663663463653266623838353966346264663134363731366336646566 Dec 12 17:22:16.428000 audit: BPF prog-id=88 op=LOAD Dec 12 17:22:16.428000 audit[2445]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2431 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663663463653266623838353966346264663134363731366336646566 Dec 12 17:22:16.428000 audit: BPF prog-id=89 op=LOAD Dec 12 17:22:16.428000 audit[2445]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2431 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663663463653266623838353966346264663134363731366336646566 Dec 12 17:22:16.428000 audit: BPF prog-id=89 op=UNLOAD Dec 12 17:22:16.428000 audit[2445]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2431 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663663463653266623838353966346264663134363731366336646566 Dec 12 17:22:16.428000 audit: BPF prog-id=88 op=UNLOAD Dec 12 17:22:16.428000 audit[2445]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2431 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663663463653266623838353966346264663134363731366336646566 Dec 12 17:22:16.428000 audit: BPF prog-id=90 op=LOAD Dec 12 17:22:16.428000 audit[2445]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2431 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.428000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663663463653266623838353966346264663134363731366336646566 Dec 12 17:22:16.444584 systemd[1]: Started cri-containerd-3afb29fef822e514bba7a024615f5f2aa30d8fd2411646ec5877b8665018cc24.scope - libcontainer container 3afb29fef822e514bba7a024615f5f2aa30d8fd2411646ec5877b8665018cc24. Dec 12 17:22:16.461129 containerd[1581]: time="2025-12-12T17:22:16.460023481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:94dfb95d5373d85ac9345f101a1c2026,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b22c7e1b6f2fb93597418af04bfc7e925b5360dba0bf56ca7b252562257055f\"" Dec 12 17:22:16.462000 audit: BPF prog-id=91 op=LOAD Dec 12 17:22:16.463000 audit: BPF prog-id=92 op=LOAD Dec 12 17:22:16.463000 audit[2489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=2467 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.463000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361666232396665663832326535313462626137613032343631356635 Dec 12 17:22:16.463000 audit: BPF prog-id=92 op=UNLOAD Dec 12 17:22:16.463000 audit[2489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2467 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.463000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361666232396665663832326535313462626137613032343631356635 Dec 12 17:22:16.464990 kubelet[2359]: E1212 17:22:16.464967 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:16.465273 containerd[1581]: time="2025-12-12T17:22:16.465113008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cf4ce2fb8859f4bdf146716c6defdf87cf83f9a2e1d3864d24ada5b7d86083c\"" Dec 12 17:22:16.464000 audit: BPF prog-id=93 op=LOAD Dec 12 17:22:16.464000 audit[2489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=2467 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361666232396665663832326535313462626137613032343631356635 Dec 12 17:22:16.464000 audit: BPF prog-id=94 op=LOAD Dec 12 17:22:16.464000 
audit[2489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=2467 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361666232396665663832326535313462626137613032343631356635 Dec 12 17:22:16.464000 audit: BPF prog-id=94 op=UNLOAD Dec 12 17:22:16.464000 audit[2489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2467 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361666232396665663832326535313462626137613032343631356635 Dec 12 17:22:16.464000 audit: BPF prog-id=93 op=UNLOAD Dec 12 17:22:16.464000 audit[2489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2467 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361666232396665663832326535313462626137613032343631356635 Dec 12 17:22:16.466191 kubelet[2359]: E1212 17:22:16.466047 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:16.465000 audit: BPF prog-id=95 op=LOAD Dec 12 17:22:16.465000 audit[2489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=2467 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361666232396665663832326535313462626137613032343631356635 Dec 12 17:22:16.468799 containerd[1581]: time="2025-12-12T17:22:16.468737922Z" level=info msg="CreateContainer within sandbox \"1b22c7e1b6f2fb93597418af04bfc7e925b5360dba0bf56ca7b252562257055f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 17:22:16.472710 containerd[1581]: time="2025-12-12T17:22:16.472641854Z" level=info msg="CreateContainer within sandbox \"6cf4ce2fb8859f4bdf146716c6defdf87cf83f9a2e1d3864d24ada5b7d86083c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 17:22:16.480981 containerd[1581]: time="2025-12-12T17:22:16.480935793Z" level=info msg="Container 
c6e1629ee881c9b8ef69fd7d84bf77afd808c25d996fe06fcce87873cd708642: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:22:16.483509 kubelet[2359]: E1212 17:22:16.483469 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.37:6443: connect: connection refused" interval="800ms" Dec 12 17:22:16.488217 containerd[1581]: time="2025-12-12T17:22:16.487274453Z" level=info msg="Container 29f73571a9e6c4e5b9f18974e2f1c8446a3991dd9e1a6d7db64e93555af70722: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:22:16.497457 containerd[1581]: time="2025-12-12T17:22:16.497409964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,} returns sandbox id \"3afb29fef822e514bba7a024615f5f2aa30d8fd2411646ec5877b8665018cc24\"" Dec 12 17:22:16.498553 containerd[1581]: time="2025-12-12T17:22:16.498454674Z" level=info msg="CreateContainer within sandbox \"1b22c7e1b6f2fb93597418af04bfc7e925b5360dba0bf56ca7b252562257055f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c6e1629ee881c9b8ef69fd7d84bf77afd808c25d996fe06fcce87873cd708642\"" Dec 12 17:22:16.498657 containerd[1581]: time="2025-12-12T17:22:16.498619426Z" level=info msg="CreateContainer within sandbox \"6cf4ce2fb8859f4bdf146716c6defdf87cf83f9a2e1d3864d24ada5b7d86083c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"29f73571a9e6c4e5b9f18974e2f1c8446a3991dd9e1a6d7db64e93555af70722\"" Dec 12 17:22:16.499206 kubelet[2359]: E1212 17:22:16.499173 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:16.499573 containerd[1581]: time="2025-12-12T17:22:16.499513552Z" level=info msg="StartContainer for \"29f73571a9e6c4e5b9f18974e2f1c8446a3991dd9e1a6d7db64e93555af70722\"" Dec 12 17:22:16.499680 containerd[1581]: time="2025-12-12T17:22:16.499657471Z" level=info msg="StartContainer for \"c6e1629ee881c9b8ef69fd7d84bf77afd808c25d996fe06fcce87873cd708642\"" Dec 12 17:22:16.501381 containerd[1581]: time="2025-12-12T17:22:16.501341314Z" level=info msg="connecting to shim c6e1629ee881c9b8ef69fd7d84bf77afd808c25d996fe06fcce87873cd708642" address="unix:///run/containerd/s/748b1e7c3cd58a9b2757a35b897878744ab07dacdb59d01f50613ddb0d3a08b4" protocol=ttrpc version=3 Dec 12 17:22:16.501381 containerd[1581]: time="2025-12-12T17:22:16.501367137Z" level=info msg="connecting to shim 29f73571a9e6c4e5b9f18974e2f1c8446a3991dd9e1a6d7db64e93555af70722" address="unix:///run/containerd/s/d1857e6a338971ffcb0ab7ac19a85eb947175322917150eeb1aebd3825401f8d" protocol=ttrpc version=3 Dec 12 17:22:16.503437 containerd[1581]: time="2025-12-12T17:22:16.503354224Z" level=info msg="CreateContainer within sandbox \"3afb29fef822e514bba7a024615f5f2aa30d8fd2411646ec5877b8665018cc24\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 17:22:16.512783 containerd[1581]: time="2025-12-12T17:22:16.512735178Z" level=info msg="Container 74b19f8335ff54d0917baf72b75e04304ebe56b6bbea6f3f782c5ffc005b9a90: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:22:16.520129 containerd[1581]: time="2025-12-12T17:22:16.520083427Z" level=info msg="CreateContainer within sandbox \"3afb29fef822e514bba7a024615f5f2aa30d8fd2411646ec5877b8665018cc24\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"74b19f8335ff54d0917baf72b75e04304ebe56b6bbea6f3f782c5ffc005b9a90\"" Dec 12 17:22:16.520868 containerd[1581]: time="2025-12-12T17:22:16.520702606Z" level=info msg="StartContainer for \"74b19f8335ff54d0917baf72b75e04304ebe56b6bbea6f3f782c5ffc005b9a90\"" Dec 12 17:22:16.521840 containerd[1581]: time="2025-12-12T17:22:16.521800477Z" level=info msg="connecting to shim 74b19f8335ff54d0917baf72b75e04304ebe56b6bbea6f3f782c5ffc005b9a90" address="unix:///run/containerd/s/c0a91635d8f1a75eb5b5e34c1382346e557d54402366c5a2918325dcd6018676" protocol=ttrpc version=3 Dec 12 17:22:16.523445 systemd[1]: Started cri-containerd-29f73571a9e6c4e5b9f18974e2f1c8446a3991dd9e1a6d7db64e93555af70722.scope - libcontainer container 29f73571a9e6c4e5b9f18974e2f1c8446a3991dd9e1a6d7db64e93555af70722. Dec 12 17:22:16.527777 systemd[1]: Started cri-containerd-c6e1629ee881c9b8ef69fd7d84bf77afd808c25d996fe06fcce87873cd708642.scope - libcontainer container c6e1629ee881c9b8ef69fd7d84bf77afd808c25d996fe06fcce87873cd708642. Dec 12 17:22:16.544274 systemd[1]: Started cri-containerd-74b19f8335ff54d0917baf72b75e04304ebe56b6bbea6f3f782c5ffc005b9a90.scope - libcontainer container 74b19f8335ff54d0917baf72b75e04304ebe56b6bbea6f3f782c5ffc005b9a90. Dec 12 17:22:16.546000 audit: BPF prog-id=96 op=LOAD Dec 12 17:22:16.546000 audit: BPF prog-id=97 op=LOAD Dec 12 17:22:16.546000 audit[2532]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2431 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239663733353731613965366334653562396631383937346532663163 Dec 12 17:22:16.547000 audit: BPF prog-id=97 op=UNLOAD Dec 12 17:22:16.547000 audit[2532]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2431 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239663733353731613965366334653562396631383937346532663163 Dec 12 17:22:16.548000 audit: BPF prog-id=98 op=LOAD Dec 12 17:22:16.548000 audit[2532]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2431 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239663733353731613965366334653562396631383937346532663163 Dec 12 17:22:16.548000 audit: BPF prog-id=99 op=LOAD Dec 12 17:22:16.548000 audit[2532]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 
a3=0 items=0 ppid=2431 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239663733353731613965366334653562396631383937346532663163 Dec 12 17:22:16.548000 audit: BPF prog-id=99 op=UNLOAD Dec 12 17:22:16.548000 audit[2532]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2431 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239663733353731613965366334653562396631383937346532663163 Dec 12 17:22:16.548000 audit: BPF prog-id=98 op=UNLOAD Dec 12 17:22:16.548000 audit[2532]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2431 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239663733353731613965366334653562396631383937346532663163 Dec 12 17:22:16.548000 audit: BPF prog-id=100 op=LOAD Dec 12 17:22:16.548000 audit[2532]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2431 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239663733353731613965366334653562396631383937346532663163 Dec 12 17:22:16.549000 audit: BPF prog-id=101 op=LOAD Dec 12 17:22:16.550000 audit: BPF prog-id=102 op=LOAD Dec 12 17:22:16.550000 audit[2531]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2403 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653136323965653838316339623865663639666437643834626637 Dec 12 17:22:16.551000 audit: BPF prog-id=102 op=UNLOAD Dec 12 17:22:16.551000 audit[2531]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2403 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653136323965653838316339623865663639666437643834626637 Dec 12 17:22:16.551000 audit: BPF prog-id=103 op=LOAD Dec 12 17:22:16.551000 audit[2531]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2403 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653136323965653838316339623865663639666437643834626637 Dec 12 17:22:16.551000 audit: BPF prog-id=104 op=LOAD Dec 12 17:22:16.551000 audit[2531]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2403 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653136323965653838316339623865663639666437643834626637 Dec 12 17:22:16.551000 audit: BPF prog-id=104 op=UNLOAD Dec 12 17:22:16.551000 audit[2531]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2403 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653136323965653838316339623865663639666437643834626637 Dec 12 17:22:16.552000 audit: BPF prog-id=103 op=UNLOAD Dec 12 17:22:16.552000 audit[2531]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2403 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.552000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653136323965653838316339623865663639666437643834626637 Dec 12 17:22:16.552000 audit: BPF prog-id=105 op=LOAD Dec 12 17:22:16.552000 audit[2531]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2403 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.552000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653136323965653838316339623865663639666437643834626637 Dec 12 17:22:16.557000 audit: BPF prog-id=106 op=LOAD Dec 12 17:22:16.558000 audit: BPF prog-id=107 op=LOAD Dec 12 17:22:16.558000 audit[2554]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=2467 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734623139663833333566663534643039313762616637326237356530 Dec 12 17:22:16.558000 audit: BPF prog-id=107 op=UNLOAD Dec 12 17:22:16.558000 audit[2554]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2467 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734623139663833333566663534643039313762616637326237356530 Dec 12 17:22:16.558000 audit: BPF prog-id=108 op=LOAD Dec 12 17:22:16.558000 audit[2554]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=2467 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734623139663833333566663534643039313762616637326237356530 Dec 12 17:22:16.558000 audit: BPF prog-id=109 op=LOAD Dec 12 17:22:16.558000 audit[2554]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=2467 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734623139663833333566663534643039313762616637326237356530 Dec 12 17:22:16.558000 audit: BPF prog-id=109 op=UNLOAD Dec 12 17:22:16.558000 audit[2554]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2467 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.558000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734623139663833333566663534643039313762616637326237356530 Dec 12 17:22:16.558000 audit: BPF prog-id=108 op=UNLOAD Dec 12 17:22:16.558000 audit[2554]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2467 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734623139663833333566663534643039313762616637326237356530 Dec 12 17:22:16.558000 audit: BPF prog-id=110 op=LOAD Dec 12 17:22:16.558000 audit[2554]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=2467 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:16.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734623139663833333566663534643039313762616637326237356530 Dec 12 17:22:16.599396 containerd[1581]: time="2025-12-12T17:22:16.599344940Z" level=info msg="StartContainer for \"c6e1629ee881c9b8ef69fd7d84bf77afd808c25d996fe06fcce87873cd708642\" returns successfully" Dec 12 17:22:16.600539 containerd[1581]: time="2025-12-12T17:22:16.600395756Z" level=info msg="StartContainer for \"74b19f8335ff54d0917baf72b75e04304ebe56b6bbea6f3f782c5ffc005b9a90\" returns successfully" Dec 12 17:22:16.601345 containerd[1581]: time="2025-12-12T17:22:16.601213053Z" level=info msg="StartContainer for \"29f73571a9e6c4e5b9f18974e2f1c8446a3991dd9e1a6d7db64e93555af70722\" returns successfully" Dec 12 17:22:16.657878 kubelet[2359]: I1212 17:22:16.657846 2359 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:22:16.658374 kubelet[2359]: E1212 17:22:16.658319 2359 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.37:6443/api/v1/nodes\": dial tcp 10.0.0.37:6443: connect: connection refused" node="localhost" Dec 12 17:22:16.906894 kubelet[2359]: E1212 17:22:16.906611 2359 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:22:16.906894 kubelet[2359]: E1212 17:22:16.906749 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:16.909752 kubelet[2359]: E1212 17:22:16.909708 2359 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:22:16.909938 kubelet[2359]: E1212 17:22:16.909828 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 
17:22:16.912678 kubelet[2359]: E1212 17:22:16.912655 2359 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:22:16.915383 kubelet[2359]: E1212 17:22:16.915312 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:17.459952 kubelet[2359]: I1212 17:22:17.459905 2359 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:22:17.915924 kubelet[2359]: E1212 17:22:17.915771 2359 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:22:17.916491 kubelet[2359]: E1212 17:22:17.916469 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:17.918767 kubelet[2359]: E1212 17:22:17.918715 2359 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:22:17.918885 kubelet[2359]: E1212 17:22:17.918852 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:18.459419 kubelet[2359]: E1212 17:22:18.459350 2359 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 12 17:22:18.613887 kubelet[2359]: I1212 17:22:18.613805 2359 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:22:18.613887 kubelet[2359]: E1212 17:22:18.613854 2359 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 12 17:22:18.680285 kubelet[2359]: I1212 17:22:18.680245 2359 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:22:18.687793 kubelet[2359]: E1212 17:22:18.687747 2359 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 12 17:22:18.687793 kubelet[2359]: I1212 17:22:18.687778 2359 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:22:18.689601 kubelet[2359]: E1212 17:22:18.689561 2359 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:22:18.689601 kubelet[2359]: I1212 17:22:18.689587 2359 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:22:18.691560 kubelet[2359]: E1212 17:22:18.691480 2359 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 12 17:22:18.867466 kubelet[2359]: I1212 17:22:18.867349 2359 apiserver.go:52] "Watching apiserver" Dec 12 17:22:18.879624 kubelet[2359]: I1212 17:22:18.879586 2359 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:22:20.286942 kubelet[2359]: I1212 17:22:20.286893 2359 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:22:20.293358 kubelet[2359]: E1212 17:22:20.293194 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:20.800975 systemd[1]: Reload requested from client PID 2633 ('systemctl') (unit session-7.scope)... Dec 12 17:22:20.800991 systemd[1]: Reloading... Dec 12 17:22:20.880135 zram_generator::config[2679]: No configuration found. Dec 12 17:22:20.919945 kubelet[2359]: E1212 17:22:20.919869 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:21.053799 systemd[1]: Reloading finished in 252 ms. Dec 12 17:22:21.084658 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:22:21.103716 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:22:21.104022 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:22:21.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:22:21.104132 systemd[1]: kubelet.service: Consumed 1.843s CPU time, 128M memory peak. Dec 12 17:22:21.109058 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 12 17:22:21.109156 kernel: audit: type=1131 audit(1765560141.103:380): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:22:21.109183 kernel: audit: type=1334 audit(1765560141.106:381): prog-id=111 op=LOAD Dec 12 17:22:21.109199 kernel: audit: type=1334 audit(1765560141.106:382): prog-id=62 op=UNLOAD Dec 12 17:22:21.109211 kernel: audit: type=1334 audit(1765560141.108:383): prog-id=112 op=LOAD Dec 12 17:22:21.106000 audit: BPF prog-id=111 op=LOAD Dec 12 17:22:21.106000 audit: BPF prog-id=62 op=UNLOAD Dec 12 17:22:21.108000 audit: BPF prog-id=112 op=LOAD Dec 12 17:22:21.106476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 12 17:22:21.108000 audit: BPF prog-id=113 op=LOAD Dec 12 17:22:21.110491 kernel: audit: type=1334 audit(1765560141.108:384): prog-id=113 op=LOAD Dec 12 17:22:21.110544 kernel: audit: type=1334 audit(1765560141.108:385): prog-id=78 op=UNLOAD Dec 12 17:22:21.108000 audit: BPF prog-id=78 op=UNLOAD Dec 12 17:22:21.111162 kernel: audit: type=1334 audit(1765560141.108:386): prog-id=79 op=UNLOAD Dec 12 17:22:21.111199 kernel: audit: type=1334 audit(1765560141.110:387): prog-id=114 op=LOAD Dec 12 17:22:21.111222 kernel: audit: type=1334 audit(1765560141.110:388): prog-id=63 op=UNLOAD Dec 12 17:22:21.108000 audit: BPF prog-id=79 op=UNLOAD Dec 12 17:22:21.110000 audit: BPF prog-id=114 op=LOAD Dec 12 17:22:21.110000 audit: BPF prog-id=63 op=UNLOAD Dec 12 17:22:21.110000 audit: BPF prog-id=115 op=LOAD Dec 12 17:22:21.112721 kernel: audit: type=1334 audit(1765560141.110:389): prog-id=115 op=LOAD Dec 12 17:22:21.110000 audit: BPF prog-id=116 op=LOAD Dec 12 17:22:21.110000 audit: BPF prog-id=64 op=UNLOAD Dec 12 17:22:21.110000 audit: BPF prog-id=65 op=UNLOAD Dec 12 17:22:21.112000 audit: BPF prog-id=117 op=LOAD Dec 12 17:22:21.112000 audit: BPF prog-id=66 op=UNLOAD Dec 12 17:22:21.113000 audit: BPF prog-id=118 op=LOAD Dec 12 17:22:21.113000 audit: BPF prog-id=119 op=LOAD Dec 12 17:22:21.113000 audit: BPF prog-id=67 op=UNLOAD Dec 12 17:22:21.113000 audit: BPF prog-id=68 op=UNLOAD Dec 12 17:22:21.114000 audit: BPF prog-id=120 op=LOAD Dec 12 17:22:21.114000 audit: BPF prog-id=61 op=UNLOAD Dec 12 17:22:21.114000 audit: BPF prog-id=121 op=LOAD Dec 12 17:22:21.114000 audit: BPF prog-id=69 op=UNLOAD Dec 12 17:22:21.114000 audit: BPF prog-id=122 op=LOAD Dec 12 17:22:21.114000 audit: BPF prog-id=123 op=LOAD Dec 12 17:22:21.114000 audit: BPF prog-id=70 op=UNLOAD Dec 12 17:22:21.114000 audit: BPF prog-id=71 op=UNLOAD Dec 12 17:22:21.115000 audit: BPF prog-id=124 op=LOAD Dec 12 17:22:21.131000 audit: BPF prog-id=72 op=UNLOAD Dec 12 17:22:21.131000 audit: BPF prog-id=125 op=LOAD Dec 12 17:22:21.131000 audit: BPF prog-id=126 op=LOAD Dec 12 17:22:21.131000 audit: BPF prog-id=73 op=UNLOAD Dec 12 17:22:21.131000 audit: BPF prog-id=74 op=UNLOAD Dec 12 17:22:21.133000 audit: BPF prog-id=127 op=LOAD Dec 12 17:22:21.133000 audit: BPF prog-id=75 op=UNLOAD Dec 12 17:22:21.133000 audit: BPF prog-id=128 op=LOAD Dec 12 17:22:21.133000 audit: BPF prog-id=129 op=LOAD Dec 12 17:22:21.133000 audit: BPF prog-id=76 op=UNLOAD Dec 12 17:22:21.133000 audit: BPF prog-id=77 op=UNLOAD Dec 12 17:22:21.134000 audit: BPF prog-id=130 op=LOAD Dec 12 17:22:21.134000 audit: BPF prog-id=80 op=UNLOAD Dec 12 17:22:21.274554 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:22:21.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:22:21.284387 (kubelet)[2721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:22:21.336527 kubelet[2721]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:22:21.336527 kubelet[2721]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Dec 12 17:22:21.336527 kubelet[2721]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:22:21.336527 kubelet[2721]: I1212 17:22:21.336447 2721 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:22:21.345060 kubelet[2721]: I1212 17:22:21.344694 2721 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 17:22:21.345060 kubelet[2721]: I1212 17:22:21.344729 2721 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:22:21.345388 kubelet[2721]: I1212 17:22:21.345369 2721 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 17:22:21.347015 kubelet[2721]: I1212 17:22:21.346986 2721 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 12 17:22:21.349749 kubelet[2721]: I1212 17:22:21.349716 2721 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:22:21.355485 kubelet[2721]: I1212 17:22:21.355460 2721 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:22:21.358708 kubelet[2721]: I1212 17:22:21.358671 2721 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 12 17:22:21.359113 kubelet[2721]: I1212 17:22:21.359069 2721 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:22:21.359367 kubelet[2721]: I1212 17:22:21.359182 2721 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:22:21.359511 kubelet[2721]: I1212 17:22:21.359497 2721 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:22:21.359566 
kubelet[2721]: I1212 17:22:21.359559 2721 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 17:22:21.359666 kubelet[2721]: I1212 17:22:21.359657 2721 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:22:21.359969 kubelet[2721]: I1212 17:22:21.359949 2721 kubelet.go:446] "Attempting to sync node with API server" Dec 12 17:22:21.360107 kubelet[2721]: I1212 17:22:21.360092 2721 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:22:21.360196 kubelet[2721]: I1212 17:22:21.360185 2721 kubelet.go:352] "Adding apiserver pod source" Dec 12 17:22:21.360259 kubelet[2721]: I1212 17:22:21.360249 2721 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:22:21.361208 kubelet[2721]: I1212 17:22:21.361181 2721 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 17:22:21.361672 kubelet[2721]: I1212 17:22:21.361651 2721 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 17:22:21.362467 kubelet[2721]: I1212 17:22:21.362294 2721 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:22:21.362467 kubelet[2721]: I1212 17:22:21.362357 2721 server.go:1287] "Started kubelet" Dec 12 17:22:21.365088 kubelet[2721]: I1212 17:22:21.362620 2721 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:22:21.365088 kubelet[2721]: I1212 17:22:21.362780 2721 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:22:21.365088 kubelet[2721]: I1212 17:22:21.363265 2721 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:22:21.365088 kubelet[2721]: I1212 17:22:21.364432 2721 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:22:21.365367 kubelet[2721]: I1212 17:22:21.365338 2721 server.go:479] "Adding debug handlers to kubelet server" Dec 12 17:22:21.366157 kubelet[2721]: I1212 17:22:21.365777 2721 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:22:21.373006 kubelet[2721]: I1212 17:22:21.372975 2721 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:22:21.373147 kubelet[2721]: I1212 17:22:21.373123 2721 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:22:21.375886 kubelet[2721]: I1212 17:22:21.373257 2721 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:22:21.375886 kubelet[2721]: E1212 17:22:21.373834 2721 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:22:21.385372 kubelet[2721]: I1212 17:22:21.385329 2721 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 17:22:21.386780 kubelet[2721]: I1212 17:22:21.386021 2721 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:22:21.386987 kubelet[2721]: I1212 17:22:21.386967 2721 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 12 17:22:21.387075 kubelet[2721]: I1212 17:22:21.387063 2721 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 17:22:21.387929 kubelet[2721]: I1212 17:22:21.387225 2721 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 17:22:21.388167 kubelet[2721]: I1212 17:22:21.388068 2721 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 17:22:21.389181 kubelet[2721]: E1212 17:22:21.389154 2721 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:22:21.391090 kubelet[2721]: I1212 17:22:21.391027 2721 factory.go:221] Registration of the containerd container factory successfully Dec 12 17:22:21.391090 kubelet[2721]: I1212 17:22:21.391082 2721 factory.go:221] Registration of the systemd container factory successfully Dec 12 17:22:21.393179 kubelet[2721]: E1212 17:22:21.387103 2721 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:22:21.437011 kubelet[2721]: I1212 17:22:21.436967 2721 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:22:21.437011 kubelet[2721]: I1212 17:22:21.436998 2721 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:22:21.437011 kubelet[2721]: I1212 17:22:21.437023 2721 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:22:21.437670 kubelet[2721]: I1212 17:22:21.437638 2721 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 17:22:21.437711 kubelet[2721]: I1212 17:22:21.437663 2721 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 17:22:21.437711 kubelet[2721]: I1212 17:22:21.437683 2721 policy_none.go:49] "None policy: Start" Dec 12 17:22:21.437711 kubelet[2721]: I1212 17:22:21.437695 2721 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:22:21.437711 kubelet[2721]: I1212 17:22:21.437706 2721 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:22:21.437818 kubelet[2721]: I1212 17:22:21.437801 2721 state_mem.go:75] "Updated machine memory state" Dec 12 17:22:21.442052 kubelet[2721]: I1212 17:22:21.441998 2721 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 17:22:21.442244 kubelet[2721]: I1212 17:22:21.442226 2721 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:22:21.442287 kubelet[2721]: I1212 17:22:21.442255 2721 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:22:21.442736 kubelet[2721]: I1212 17:22:21.442553 2721 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:22:21.444103 kubelet[2721]: E1212 17:22:21.443582 2721 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:22:21.490354 kubelet[2721]: I1212 17:22:21.490312 2721 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:22:21.490473 kubelet[2721]: I1212 17:22:21.490326 2721 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:22:21.490497 kubelet[2721]: I1212 17:22:21.490477 2721 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:22:21.499564 kubelet[2721]: E1212 17:22:21.499496 2721 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 12 17:22:21.546746 kubelet[2721]: I1212 17:22:21.546708 2721 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:22:21.553765 kubelet[2721]: I1212 17:22:21.553453 2721 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 12 17:22:21.553765 kubelet[2721]: I1212 17:22:21.553569 2721 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:22:21.675769 kubelet[2721]: I1212 17:22:21.674962 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/94dfb95d5373d85ac9345f101a1c2026-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"94dfb95d5373d85ac9345f101a1c2026\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:22:21.675769 kubelet[2721]: I1212 17:22:21.675004 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/94dfb95d5373d85ac9345f101a1c2026-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"94dfb95d5373d85ac9345f101a1c2026\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:22:21.675769 kubelet[2721]: I1212 17:22:21.675026 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/94dfb95d5373d85ac9345f101a1c2026-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"94dfb95d5373d85ac9345f101a1c2026\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:22:21.675769 kubelet[2721]: I1212 17:22:21.675059 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:22:21.675769 kubelet[2721]: I1212 17:22:21.675080 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:22:21.675980 kubelet[2721]: I1212 17:22:21.675095 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 
17:22:21.675980 kubelet[2721]: I1212 17:22:21.675109 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:22:21.675980 kubelet[2721]: I1212 17:22:21.675125 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:22:21.675980 kubelet[2721]: I1212 17:22:21.675151 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:22:21.797414 kubelet[2721]: E1212 17:22:21.797277 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:21.797599 kubelet[2721]: E1212 17:22:21.797560 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:21.800789 kubelet[2721]: E1212 17:22:21.800759 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:22.361631 kubelet[2721]: I1212 17:22:22.361593 2721 apiserver.go:52] "Watching apiserver" Dec 12 17:22:22.374151 kubelet[2721]: I1212 17:22:22.374102 2721 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:22:22.419975 kubelet[2721]: I1212 17:22:22.419766 2721 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:22:22.419975 kubelet[2721]: E1212 17:22:22.419765 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:22.420137 kubelet[2721]: E1212 17:22:22.419994 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:22.437673 kubelet[2721]: E1212 17:22:22.437392 2721 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 12 17:22:22.437673 kubelet[2721]: E1212 17:22:22.437577 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:22.458695 kubelet[2721]: I1212 17:22:22.458632 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.458598116 podStartE2EDuration="2.458598116s" 
podCreationTimestamp="2025-12-12 17:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:22:22.458525428 +0000 UTC m=+1.169329646" watchObservedRunningTime="2025-12-12 17:22:22.458598116 +0000 UTC m=+1.169402374" Dec 12 17:22:22.467554 kubelet[2721]: I1212 17:22:22.467389 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.4673722630000001 podStartE2EDuration="1.467372263s" podCreationTimestamp="2025-12-12 17:22:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:22:22.466298896 +0000 UTC m=+1.177103154" watchObservedRunningTime="2025-12-12 17:22:22.467372263 +0000 UTC m=+1.178176521" Dec 12 17:22:22.488239 kubelet[2721]: I1212 17:22:22.488163 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.488143654 podStartE2EDuration="1.488143654s" podCreationTimestamp="2025-12-12 17:22:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:22:22.476245111 +0000 UTC m=+1.187049369" watchObservedRunningTime="2025-12-12 17:22:22.488143654 +0000 UTC m=+1.198947912" Dec 12 17:22:23.423055 kubelet[2721]: E1212 17:22:23.422311 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:23.423055 kubelet[2721]: E1212 17:22:23.422897 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:24.424046 kubelet[2721]: E1212 17:22:24.424003 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:26.701823 kubelet[2721]: I1212 17:22:26.701745 2721 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 17:22:26.702371 containerd[1581]: time="2025-12-12T17:22:26.702270289Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 17:22:26.702551 kubelet[2721]: I1212 17:22:26.702482 2721 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 17:22:26.778138 kubelet[2721]: E1212 17:22:26.778030 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:27.429441 kubelet[2721]: E1212 17:22:27.429394 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:27.545199 systemd[1]: Created slice kubepods-besteffort-pod1f51dd84_7b2b_48f6_b182_bcf25bb05073.slice - libcontainer container kubepods-besteffort-pod1f51dd84_7b2b_48f6_b182_bcf25bb05073.slice. 
Dec 12 17:22:27.612637 kubelet[2721]: I1212 17:22:27.612554 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vthm\" (UniqueName: \"kubernetes.io/projected/1f51dd84-7b2b-48f6-b182-bcf25bb05073-kube-api-access-6vthm\") pod \"kube-proxy-hslgr\" (UID: \"1f51dd84-7b2b-48f6-b182-bcf25bb05073\") " pod="kube-system/kube-proxy-hslgr" Dec 12 17:22:27.612845 kubelet[2721]: I1212 17:22:27.612662 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1f51dd84-7b2b-48f6-b182-bcf25bb05073-kube-proxy\") pod \"kube-proxy-hslgr\" (UID: \"1f51dd84-7b2b-48f6-b182-bcf25bb05073\") " pod="kube-system/kube-proxy-hslgr" Dec 12 17:22:27.612845 kubelet[2721]: I1212 17:22:27.612701 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f51dd84-7b2b-48f6-b182-bcf25bb05073-xtables-lock\") pod \"kube-proxy-hslgr\" (UID: \"1f51dd84-7b2b-48f6-b182-bcf25bb05073\") " pod="kube-system/kube-proxy-hslgr" Dec 12 17:22:27.612845 kubelet[2721]: I1212 17:22:27.612719 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f51dd84-7b2b-48f6-b182-bcf25bb05073-lib-modules\") pod \"kube-proxy-hslgr\" (UID: \"1f51dd84-7b2b-48f6-b182-bcf25bb05073\") " pod="kube-system/kube-proxy-hslgr" Dec 12 17:22:27.812768 systemd[1]: Created slice kubepods-besteffort-podee91dc52_deef_40bb_b97a_a4af1ad86b33.slice - libcontainer container kubepods-besteffort-podee91dc52_deef_40bb_b97a_a4af1ad86b33.slice. Dec 12 17:22:27.859065 kubelet[2721]: E1212 17:22:27.858870 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:27.860067 containerd[1581]: time="2025-12-12T17:22:27.859958904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hslgr,Uid:1f51dd84-7b2b-48f6-b182-bcf25bb05073,Namespace:kube-system,Attempt:0,}" Dec 12 17:22:27.881768 containerd[1581]: time="2025-12-12T17:22:27.881698853Z" level=info msg="connecting to shim 2f07e0a5fb09d203a949e1edc0246d892c59ea4965cc17d6918ced6e03521c3e" address="unix:///run/containerd/s/112d6f05f3528d3c01d55c0f2325d6710aa9e6d449ab615941240d09b0b96772" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:22:27.910514 systemd[1]: Started cri-containerd-2f07e0a5fb09d203a949e1edc0246d892c59ea4965cc17d6918ced6e03521c3e.scope - libcontainer container 2f07e0a5fb09d203a949e1edc0246d892c59ea4965cc17d6918ced6e03521c3e. 
Dec 12 17:22:27.914847 kubelet[2721]: I1212 17:22:27.914803 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnmwp\" (UniqueName: \"kubernetes.io/projected/ee91dc52-deef-40bb-b97a-a4af1ad86b33-kube-api-access-nnmwp\") pod \"tigera-operator-7dcd859c48-chcbz\" (UID: \"ee91dc52-deef-40bb-b97a-a4af1ad86b33\") " pod="tigera-operator/tigera-operator-7dcd859c48-chcbz" Dec 12 17:22:27.914847 kubelet[2721]: I1212 17:22:27.914853 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ee91dc52-deef-40bb-b97a-a4af1ad86b33-var-lib-calico\") pod \"tigera-operator-7dcd859c48-chcbz\" (UID: \"ee91dc52-deef-40bb-b97a-a4af1ad86b33\") " pod="tigera-operator/tigera-operator-7dcd859c48-chcbz" Dec 12 17:22:27.920000 audit: BPF prog-id=131 op=LOAD Dec 12 17:22:27.922428 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 12 17:22:27.922591 kernel: audit: type=1334 audit(1765560147.920:422): prog-id=131 op=LOAD Dec 12 17:22:27.922000 audit: BPF prog-id=132 op=LOAD Dec 12 17:22:27.922000 audit[2792]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2782 pid=2792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:27.926882 kernel: audit: type=1334 audit(1765560147.922:423): prog-id=132 op=LOAD Dec 12 17:22:27.926945 kernel: audit: type=1300 audit(1765560147.922:423): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2782 pid=2792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:27.926963 kernel: audit: type=1327 audit(1765560147.922:423): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303765306135666230396432303361393439653165646330323436 Dec 12 17:22:27.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303765306135666230396432303361393439653165646330323436 Dec 12 17:22:27.929774 kernel: audit: type=1334 audit(1765560147.922:424): prog-id=132 op=UNLOAD Dec 12 17:22:27.922000 audit: BPF prog-id=132 op=UNLOAD Dec 12 17:22:27.922000 audit[2792]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2782 pid=2792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:27.936216 kernel: audit: type=1300 audit(1765560147.922:424): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2782 pid=2792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:27.936341 kernel: audit: type=1327 audit(1765560147.922:424): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303765306135666230396432303361393439653165646330323436 Dec 12 17:22:27.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303765306135666230396432303361393439653165646330323436 Dec 12 17:22:27.941276 kernel: audit: type=1334 audit(1765560147.922:425): prog-id=133 op=LOAD Dec 12 17:22:27.922000 audit: BPF prog-id=133 op=LOAD Dec 12 17:22:27.922000 audit[2792]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2782 pid=2792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:27.944831 kernel: audit: type=1300 audit(1765560147.922:425): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2782 pid=2792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:27.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303765306135666230396432303361393439653165646330323436 Dec 12 17:22:27.948961 kernel: audit: type=1327 audit(1765560147.922:425): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303765306135666230396432303361393439653165646330323436 Dec 12 17:22:27.922000 audit: BPF prog-id=134 op=LOAD Dec 12 17:22:27.922000 audit[2792]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2782 pid=2792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:27.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303765306135666230396432303361393439653165646330323436 Dec 12 17:22:27.925000 audit: BPF prog-id=134 op=UNLOAD Dec 12 17:22:27.925000 audit[2792]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2782 pid=2792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:27.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303765306135666230396432303361393439653165646330323436 Dec 12 17:22:27.925000 audit: BPF prog-id=133 op=UNLOAD Dec 12 17:22:27.925000 audit[2792]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 
a0=15 a1=0 a2=0 a3=0 items=0 ppid=2782 pid=2792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:27.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303765306135666230396432303361393439653165646330323436 Dec 12 17:22:27.925000 audit: BPF prog-id=135 op=LOAD Dec 12 17:22:27.925000 audit[2792]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2782 pid=2792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:27.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266303765306135666230396432303361393439653165646330323436 Dec 12 17:22:27.961099 containerd[1581]: time="2025-12-12T17:22:27.961057660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hslgr,Uid:1f51dd84-7b2b-48f6-b182-bcf25bb05073,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f07e0a5fb09d203a949e1edc0246d892c59ea4965cc17d6918ced6e03521c3e\"" Dec 12 17:22:27.962070 kubelet[2721]: E1212 17:22:27.962019 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:27.964983 containerd[1581]: time="2025-12-12T17:22:27.964946655Z" level=info msg="CreateContainer within sandbox \"2f07e0a5fb09d203a949e1edc0246d892c59ea4965cc17d6918ced6e03521c3e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 17:22:27.984423 containerd[1581]: time="2025-12-12T17:22:27.984030674Z" level=info msg="Container a01c82607fd5b4b53e6e356029144ee9ddfac26390e3a6479aa08e04c10ad283: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:22:27.991666 containerd[1581]: time="2025-12-12T17:22:27.991610342Z" level=info msg="CreateContainer within sandbox \"2f07e0a5fb09d203a949e1edc0246d892c59ea4965cc17d6918ced6e03521c3e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a01c82607fd5b4b53e6e356029144ee9ddfac26390e3a6479aa08e04c10ad283\"" Dec 12 17:22:27.992240 containerd[1581]: time="2025-12-12T17:22:27.992212792Z" level=info msg="StartContainer for \"a01c82607fd5b4b53e6e356029144ee9ddfac26390e3a6479aa08e04c10ad283\"" Dec 12 17:22:27.993997 containerd[1581]: time="2025-12-12T17:22:27.993886231Z" level=info msg="connecting to shim a01c82607fd5b4b53e6e356029144ee9ddfac26390e3a6479aa08e04c10ad283" address="unix:///run/containerd/s/112d6f05f3528d3c01d55c0f2325d6710aa9e6d449ab615941240d09b0b96772" protocol=ttrpc version=3 Dec 12 17:22:28.017362 systemd[1]: Started cri-containerd-a01c82607fd5b4b53e6e356029144ee9ddfac26390e3a6479aa08e04c10ad283.scope - libcontainer container a01c82607fd5b4b53e6e356029144ee9ddfac26390e3a6479aa08e04c10ad283. 
Dec 12 17:22:28.086000 audit: BPF prog-id=136 op=LOAD Dec 12 17:22:28.086000 audit[2818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2782 pid=2818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130316338323630376664356234623533653665333536303239313434 Dec 12 17:22:28.086000 audit: BPF prog-id=137 op=LOAD Dec 12 17:22:28.086000 audit[2818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2782 pid=2818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130316338323630376664356234623533653665333536303239313434 Dec 12 17:22:28.086000 audit: BPF prog-id=137 op=UNLOAD Dec 12 17:22:28.086000 audit[2818]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2782 pid=2818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130316338323630376664356234623533653665333536303239313434 Dec 12 17:22:28.086000 audit: BPF prog-id=136 op=UNLOAD Dec 12 17:22:28.086000 audit[2818]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2782 pid=2818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130316338323630376664356234623533653665333536303239313434 Dec 12 17:22:28.086000 audit: BPF prog-id=138 op=LOAD Dec 12 17:22:28.086000 audit[2818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2782 pid=2818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.086000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130316338323630376664356234623533653665333536303239313434 Dec 12 17:22:28.111973 containerd[1581]: time="2025-12-12T17:22:28.111927554Z" level=info msg="StartContainer for 
\"a01c82607fd5b4b53e6e356029144ee9ddfac26390e3a6479aa08e04c10ad283\" returns successfully" Dec 12 17:22:28.120389 containerd[1581]: time="2025-12-12T17:22:28.120346663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-chcbz,Uid:ee91dc52-deef-40bb-b97a-a4af1ad86b33,Namespace:tigera-operator,Attempt:0,}" Dec 12 17:22:28.130648 kubelet[2721]: E1212 17:22:28.130372 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:28.168493 containerd[1581]: time="2025-12-12T17:22:28.168273310Z" level=info msg="connecting to shim baf73007930dad765dab00e5a26d191aa2271ce5e7461c7cdf7d99baeaaf5b14" address="unix:///run/containerd/s/d5727760c4ff278d25a5eb28311952ec61b5d149c8405fedc241fa2745a7b540" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:22:28.196302 systemd[1]: Started cri-containerd-baf73007930dad765dab00e5a26d191aa2271ce5e7461c7cdf7d99baeaaf5b14.scope - libcontainer container baf73007930dad765dab00e5a26d191aa2271ce5e7461c7cdf7d99baeaaf5b14. Dec 12 17:22:28.208000 audit: BPF prog-id=139 op=LOAD Dec 12 17:22:28.208000 audit: BPF prog-id=140 op=LOAD Dec 12 17:22:28.208000 audit[2871]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2858 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663733303037393330646164373635646162303065356132366431 Dec 12 17:22:28.209000 audit: BPF prog-id=140 op=UNLOAD Dec 12 17:22:28.209000 audit[2871]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2858 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663733303037393330646164373635646162303065356132366431 Dec 12 17:22:28.209000 audit: BPF prog-id=141 op=LOAD Dec 12 17:22:28.209000 audit[2871]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2858 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663733303037393330646164373635646162303065356132366431 Dec 12 17:22:28.209000 audit: BPF prog-id=142 op=LOAD Dec 12 17:22:28.209000 audit[2871]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2858 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663733303037393330646164373635646162303065356132366431 Dec 12 17:22:28.209000 audit: BPF prog-id=142 op=UNLOAD Dec 12 17:22:28.209000 audit[2871]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2858 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663733303037393330646164373635646162303065356132366431 Dec 12 17:22:28.209000 audit: BPF prog-id=141 op=UNLOAD Dec 12 17:22:28.209000 audit[2871]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2858 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663733303037393330646164373635646162303065356132366431 Dec 12 17:22:28.209000 audit: BPF prog-id=143 op=LOAD Dec 12 17:22:28.209000 audit[2871]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2858 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261663733303037393330646164373635646162303065356132366431 Dec 12 17:22:28.238125 containerd[1581]: time="2025-12-12T17:22:28.238077758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-chcbz,Uid:ee91dc52-deef-40bb-b97a-a4af1ad86b33,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"baf73007930dad765dab00e5a26d191aa2271ce5e7461c7cdf7d99baeaaf5b14\"" Dec 12 17:22:28.240225 containerd[1581]: time="2025-12-12T17:22:28.240174503Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 17:22:28.285000 audit[2929]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=2929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.285000 audit[2929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff321a2d0 a2=0 a3=1 items=0 ppid=2830 pid=2929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.285000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 
17:22:28.286000 audit[2930]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=2930 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.286000 audit[2930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe889b7c0 a2=0 a3=1 items=0 ppid=2830 pid=2930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.286000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 17:22:28.286000 audit[2931]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=2931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.286000 audit[2931]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff53ad510 a2=0 a3=1 items=0 ppid=2830 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.286000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 17:22:28.287000 audit[2932]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=2932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.287000 audit[2932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc12c19c0 a2=0 a3=1 items=0 ppid=2830 pid=2932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.287000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 17:22:28.289000 audit[2934]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=2934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.289000 audit[2934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe1ba5d00 a2=0 a3=1 items=0 ppid=2830 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.289000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 17:22:28.290000 audit[2933]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=2933 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.290000 audit[2933]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc30fa630 a2=0 a3=1 items=0 ppid=2830 pid=2933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.290000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 17:22:28.389000 audit[2936]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.389000 audit[2936]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffeee3c8a0 a2=0 a3=1 items=0 ppid=2830 pid=2936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.389000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 17:22:28.392000 audit[2938]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.392000 audit[2938]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffe7bb490 a2=0 a3=1 items=0 ppid=2830 pid=2938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.392000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 12 17:22:28.397000 audit[2941]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.397000 audit[2941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe8514b70 a2=0 a3=1 items=0 ppid=2830 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.397000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 12 17:22:28.399000 audit[2942]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=2942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.399000 audit[2942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd5199a40 a2=0 a3=1 items=0 ppid=2830 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.399000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 17:22:28.402000 audit[2944]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=2944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.402000 audit[2944]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffea8eac40 a2=0 a3=1 items=0 ppid=2830 pid=2944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.402000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 17:22:28.403000 audit[2945]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.403000 audit[2945]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff8aed650 a2=0 a3=1 items=0 ppid=2830 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.403000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 17:22:28.407000 audit[2947]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.407000 audit[2947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffed4aea30 a2=0 a3=1 items=0 ppid=2830 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.407000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 17:22:28.411000 audit[2950]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.411000 audit[2950]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc4e2e3d0 a2=0 a3=1 items=0 ppid=2830 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.411000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 12 17:22:28.413000 audit[2951]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.413000 audit[2951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc3d82e0 a2=0 a3=1 items=0 ppid=2830 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.413000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 17:22:28.416000 audit[2953]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.416000 audit[2953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffec193740 a2=0 a3=1 items=0 ppid=2830 pid=2953 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.416000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 17:22:28.417000 audit[2954]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.417000 audit[2954]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc6d7dc30 a2=0 a3=1 items=0 ppid=2830 pid=2954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.417000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 17:22:28.420000 audit[2956]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2956 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.420000 audit[2956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffecc94a30 a2=0 a3=1 items=0 ppid=2830 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.420000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 17:22:28.424000 audit[2959]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.424000 audit[2959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe705cce0 a2=0 a3=1 items=0 ppid=2830 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.424000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 17:22:28.429000 audit[2962]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=2962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.429000 audit[2962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcd872d00 a2=0 a3=1 items=0 ppid=2830 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.429000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 
17:22:28.431000 audit[2963]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=2963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.431000 audit[2963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcb3cca10 a2=0 a3=1 items=0 ppid=2830 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.431000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 17:22:28.436182 kubelet[2721]: E1212 17:22:28.435999 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:28.435000 audit[2965]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=2965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.435000 audit[2965]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffcb91b7d0 a2=0 a3=1 items=0 ppid=2830 pid=2965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.435000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:22:28.437189 kubelet[2721]: E1212 17:22:28.437168 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:28.441000 audit[2968]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=2968 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.441000 audit[2968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc9f2aff0 a2=0 a3=1 items=0 ppid=2830 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.441000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:22:28.442000 audit[2969]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=2969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.442000 audit[2969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffea0ca4b0 a2=0 a3=1 items=0 ppid=2830 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.442000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 17:22:28.445000 audit[2971]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=2971 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:22:28.445000 audit[2971]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=532 a0=3 a1=ffffd0046410 a2=0 a3=1 items=0 ppid=2830 pid=2971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.445000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 17:22:28.466000 audit[2977]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=2977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:28.466000 audit[2977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc413a950 a2=0 a3=1 items=0 ppid=2830 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.466000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:28.476000 audit[2977]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=2977 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:28.476000 audit[2977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffc413a950 a2=0 a3=1 items=0 ppid=2830 pid=2977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.476000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:28.479000 audit[2982]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=2982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.479000 audit[2982]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd1611db0 a2=0 a3=1 items=0 ppid=2830 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.479000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 17:22:28.484000 audit[2984]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=2984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.484000 audit[2984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff400c510 a2=0 a3=1 items=0 ppid=2830 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.484000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 12 17:22:28.489000 audit[2987]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=2987 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.489000 audit[2987]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffdfef01e0 a2=0 a3=1 items=0 ppid=2830 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.489000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 12 17:22:28.491000 audit[2988]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.491000 audit[2988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1dc09f0 a2=0 a3=1 items=0 ppid=2830 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.491000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 17:22:28.494000 audit[2990]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.494000 audit[2990]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd0ca1dc0 a2=0 a3=1 items=0 ppid=2830 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.494000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 17:22:28.495000 audit[2991]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=2991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.495000 audit[2991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd50ea5f0 a2=0 a3=1 items=0 ppid=2830 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.495000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 17:22:28.498000 audit[2993]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=2993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.498000 audit[2993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe3a68560 a2=0 a3=1 items=0 ppid=2830 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.498000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 12 17:22:28.502000 audit[2996]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=2996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.502000 audit[2996]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffffd498ab0 a2=0 a3=1 items=0 ppid=2830 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.502000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 12 17:22:28.504000 audit[2997]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=2997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.504000 audit[2997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeeb44a80 a2=0 a3=1 items=0 ppid=2830 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.504000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 17:22:28.507000 audit[2999]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=2999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.507000 audit[2999]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe862dee0 a2=0 a3=1 items=0 ppid=2830 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.507000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 17:22:28.508000 audit[3000]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.508000 audit[3000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd3c199f0 a2=0 a3=1 items=0 ppid=2830 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.508000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 17:22:28.513000 audit[3002]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.513000 audit[3002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffed0a44b0 a2=0 a3=1 items=0 ppid=2830 pid=3002 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.513000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 12 17:22:28.517000 audit[3005]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.517000 audit[3005]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd948b820 a2=0 a3=1 items=0 ppid=2830 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.517000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 12 17:22:28.522000 audit[3008]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.522000 audit[3008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd2fb2950 a2=0 a3=1 items=0 ppid=2830 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.522000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 12 17:22:28.524000 audit[3009]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.524000 audit[3009]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe9fb89c0 a2=0 a3=1 items=0 ppid=2830 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.524000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 17:22:28.527000 audit[3011]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.527000 audit[3011]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe71651d0 a2=0 a3=1 items=0 ppid=2830 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.527000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 
17:22:28.532000 audit[3014]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.532000 audit[3014]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc20ea1e0 a2=0 a3=1 items=0 ppid=2830 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.532000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:22:28.534000 audit[3015]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3015 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.534000 audit[3015]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcb9e1a50 a2=0 a3=1 items=0 ppid=2830 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.534000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 17:22:28.537000 audit[3017]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3017 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.537000 audit[3017]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe57e6550 a2=0 a3=1 items=0 ppid=2830 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.537000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 17:22:28.540000 audit[3018]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.540000 audit[3018]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd56117f0 a2=0 a3=1 items=0 ppid=2830 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.540000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 17:22:28.544000 audit[3020]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.544000 audit[3020]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffccf6d70 a2=0 a3=1 items=0 ppid=2830 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.544000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:22:28.549000 audit[3023]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3023 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:22:28.549000 audit[3023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe979c310 a2=0 a3=1 items=0 ppid=2830 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.549000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:22:28.553000 audit[3025]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 17:22:28.553000 audit[3025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffd9617ce0 a2=0 a3=1 items=0 ppid=2830 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.553000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:28.553000 audit[3025]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 17:22:28.553000 audit[3025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd9617ce0 a2=0 a3=1 items=0 ppid=2830 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:28.553000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:29.590093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount358881452.mount: Deactivated successfully. 
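The NETFILTER_CFG burst above (tables mangle, nat and filter, families 2 and 10, all with ppid 2830, presumably the kube-proxy started just before) registers the KUBE-* chains (KUBE-PROXY-CANARY, KUBE-SERVICES, KUBE-POSTROUTING and so on) through /usr/sbin/xtables-nft-multi, and each entry's PROCTITLE again hex-encodes the exact command. A self-contained sketch decoding the first one (table=mangle:54):

    # Decode the PROCTITLE of the first NETFILTER_CFG entry above; the hex is the
    # NUL-separated argv of the command that registered KUBE-PROXY-CANARY in mangle.
    argv = bytes.fromhex(
        "69707461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
    ).split(b"\x00")
    print(b" ".join(argv).decode())
    # iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle

The later iptables-restore and ip6tables-restore entries decode the same way, showing the --noflush --counters bulk updates.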
Dec 12 17:22:31.052940 containerd[1581]: time="2025-12-12T17:22:31.052257298Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:31.052940 containerd[1581]: time="2025-12-12T17:22:31.052910890Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 12 17:22:31.053559 containerd[1581]: time="2025-12-12T17:22:31.053534837Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:31.056187 containerd[1581]: time="2025-12-12T17:22:31.056143565Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:31.056688 containerd[1581]: time="2025-12-12T17:22:31.056640330Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.816419219s" Dec 12 17:22:31.056688 containerd[1581]: time="2025-12-12T17:22:31.056674696Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 12 17:22:31.071520 containerd[1581]: time="2025-12-12T17:22:31.071283044Z" level=info msg="CreateContainer within sandbox \"baf73007930dad765dab00e5a26d191aa2271ce5e7461c7cdf7d99baeaaf5b14\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 17:22:31.082174 containerd[1581]: time="2025-12-12T17:22:31.082119105Z" level=info msg="Container 5af5917b688305e1547c3f410bf001e74817dae0001e2ec085a6f12aa9de6a01: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:22:31.084896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4052054215.mount: Deactivated successfully. Dec 12 17:22:31.092541 containerd[1581]: time="2025-12-12T17:22:31.092408471Z" level=info msg="CreateContainer within sandbox \"baf73007930dad765dab00e5a26d191aa2271ce5e7461c7cdf7d99baeaaf5b14\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5af5917b688305e1547c3f410bf001e74817dae0001e2ec085a6f12aa9de6a01\"" Dec 12 17:22:31.097574 containerd[1581]: time="2025-12-12T17:22:31.097521869Z" level=info msg="StartContainer for \"5af5917b688305e1547c3f410bf001e74817dae0001e2ec085a6f12aa9de6a01\"" Dec 12 17:22:31.098547 containerd[1581]: time="2025-12-12T17:22:31.098521641Z" level=info msg="connecting to shim 5af5917b688305e1547c3f410bf001e74817dae0001e2ec085a6f12aa9de6a01" address="unix:///run/containerd/s/d5727760c4ff278d25a5eb28311952ec61b5d149c8405fedc241fa2745a7b540" protocol=ttrpc version=3 Dec 12 17:22:31.140253 systemd[1]: Started cri-containerd-5af5917b688305e1547c3f410bf001e74817dae0001e2ec085a6f12aa9de6a01.scope - libcontainer container 5af5917b688305e1547c3f410bf001e74817dae0001e2ec085a6f12aa9de6a01. 
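The pull that completes above reports both a byte count and a wall-clock duration, which gives a rough average throughput for the tigera-operator image. A back-of-the-envelope sketch with numbers copied from the containerd lines (the separately quoted size, 22147999, is close to but not identical to the bytes-read figure, and the log does not say how it is derived):

    # Rough average throughput for the quay.io/tigera/operator:v1.38.7 pull above.
    bytes_read = 20_773_434        # "stop pulling image ... bytes read=20773434"
    pull_seconds = 2.816419219     # "Pulled image ... in 2.816419219s"
    print(f"~{bytes_read / pull_seconds / 1_048_576:.1f} MiB/s")
    # prints: ~7.0 MiB/s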
Dec 12 17:22:31.149000 audit: BPF prog-id=144 op=LOAD Dec 12 17:22:31.149000 audit: BPF prog-id=145 op=LOAD Dec 12 17:22:31.149000 audit[3034]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2858 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:31.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561663539313762363838333035653135343763336634313062663030 Dec 12 17:22:31.149000 audit: BPF prog-id=145 op=UNLOAD Dec 12 17:22:31.149000 audit[3034]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2858 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:31.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561663539313762363838333035653135343763336634313062663030 Dec 12 17:22:31.150000 audit: BPF prog-id=146 op=LOAD Dec 12 17:22:31.150000 audit[3034]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2858 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:31.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561663539313762363838333035653135343763336634313062663030 Dec 12 17:22:31.150000 audit: BPF prog-id=147 op=LOAD Dec 12 17:22:31.150000 audit[3034]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2858 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:31.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561663539313762363838333035653135343763336634313062663030 Dec 12 17:22:31.150000 audit: BPF prog-id=147 op=UNLOAD Dec 12 17:22:31.150000 audit[3034]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2858 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:31.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561663539313762363838333035653135343763336634313062663030 Dec 12 17:22:31.150000 audit: BPF prog-id=146 op=UNLOAD Dec 12 17:22:31.150000 audit[3034]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2858 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:31.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561663539313762363838333035653135343763336634313062663030 Dec 12 17:22:31.150000 audit: BPF prog-id=148 op=LOAD Dec 12 17:22:31.150000 audit[3034]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2858 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:31.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561663539313762363838333035653135343763336634313062663030 Dec 12 17:22:31.172987 containerd[1581]: time="2025-12-12T17:22:31.172530307Z" level=info msg="StartContainer for \"5af5917b688305e1547c3f410bf001e74817dae0001e2ec085a6f12aa9de6a01\" returns successfully" Dec 12 17:22:31.456993 kubelet[2721]: I1212 17:22:31.455913 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hslgr" podStartSLOduration=4.455894237 podStartE2EDuration="4.455894237s" podCreationTimestamp="2025-12-12 17:22:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:22:28.61402922 +0000 UTC m=+7.324833478" watchObservedRunningTime="2025-12-12 17:22:31.455894237 +0000 UTC m=+10.166698495" Dec 12 17:22:34.152647 kubelet[2721]: E1212 17:22:34.151059 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:34.167583 kubelet[2721]: I1212 17:22:34.167519 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-chcbz" podStartSLOduration=4.34260096 podStartE2EDuration="7.167495413s" podCreationTimestamp="2025-12-12 17:22:27 +0000 UTC" firstStartedPulling="2025-12-12 17:22:28.239592665 +0000 UTC m=+6.950396923" lastFinishedPulling="2025-12-12 17:22:31.064487118 +0000 UTC m=+9.775291376" observedRunningTime="2025-12-12 17:22:31.457656419 +0000 UTC m=+10.168460677" watchObservedRunningTime="2025-12-12 17:22:34.167495413 +0000 UTC m=+12.878299671" Dec 12 17:22:34.451884 kubelet[2721]: E1212 17:22:34.451450 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:36.566898 sudo[1797]: pam_unix(sudo:session): session closed for user root Dec 12 17:22:36.566000 audit[1797]: USER_END pid=1797 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 17:22:36.572220 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 12 17:22:36.572308 kernel: audit: type=1106 audit(1765560156.566:502): pid=1797 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:22:36.566000 audit[1797]: CRED_DISP pid=1797 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:22:36.574304 sshd[1796]: Connection closed by 10.0.0.1 port 47320 Dec 12 17:22:36.573267 sshd-session[1793]: pam_unix(sshd:session): session closed for user core Dec 12 17:22:36.581416 kernel: audit: type=1104 audit(1765560156.566:503): pid=1797 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:22:36.581506 kernel: audit: type=1106 audit(1765560156.576:504): pid=1793 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:22:36.576000 audit[1793]: USER_END pid=1793 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:22:36.580973 systemd[1]: sshd@6-10.0.0.37:22-10.0.0.1:47320.service: Deactivated successfully. Dec 12 17:22:36.577000 audit[1793]: CRED_DISP pid=1793 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:22:36.584800 kernel: audit: type=1104 audit(1765560156.577:505): pid=1793 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:22:36.584786 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 17:22:36.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.37:22-10.0.0.1:47320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:22:36.585695 systemd[1]: session-7.scope: Consumed 5.455s CPU time, 212.7M memory peak. Dec 12 17:22:36.587361 kernel: audit: type=1131 audit(1765560156.579:506): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.37:22-10.0.0.1:47320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:22:36.587455 systemd-logind[1562]: Session 7 logged out. Waiting for processes to exit. Dec 12 17:22:36.588458 systemd-logind[1562]: Removed session 7. 
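The proctitle= values in the audit records here are the raw process command line, hex-encoded with NUL bytes separating the arguments. A minimal decoding sketch, assuming Python 3 (decode_proctitle is an illustrative helper, not part of auditd or any tool appearing in this log); the sample input is the iptables-restore record logged just below:

```python
# Audit PROCTITLE values are argv hex-encoded, with NUL separators between arguments.
def decode_proctitle(hex_value: str) -> list:
    return bytes.fromhex(hex_value).decode("utf-8", errors="replace").split("\x00")

print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))
# -> ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```

Applied to the runc records above, the same decoding yields runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/... with the container ID truncated exactly as in the log.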
Dec 12 17:22:37.414000 audit[3124]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:37.421088 kernel: audit: type=1325 audit(1765560157.414:507): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:37.414000 audit[3124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd5af5310 a2=0 a3=1 items=0 ppid=2830 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:37.429061 kernel: audit: type=1300 audit(1765560157.414:507): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd5af5310 a2=0 a3=1 items=0 ppid=2830 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:37.414000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:37.432070 kernel: audit: type=1327 audit(1765560157.414:507): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:37.435000 audit[3124]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:37.439068 kernel: audit: type=1325 audit(1765560157.435:508): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:37.435000 audit[3124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd5af5310 a2=0 a3=1 items=0 ppid=2830 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:37.435000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:37.448059 kernel: audit: type=1300 audit(1765560157.435:508): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd5af5310 a2=0 a3=1 items=0 ppid=2830 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:37.462000 audit[3126]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3126 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:37.462000 audit[3126]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffed273bb0 a2=0 a3=1 items=0 ppid=2830 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:37.462000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:37.477000 audit[3126]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3126 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 12 17:22:37.477000 audit[3126]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffed273bb0 a2=0 a3=1 items=0 ppid=2830 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:37.477000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:39.535180 update_engine[1564]: I20251212 17:22:39.535099 1564 update_attempter.cc:509] Updating boot flags... Dec 12 17:22:40.339000 audit[3146]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:40.339000 audit[3146]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff4d0d880 a2=0 a3=1 items=0 ppid=2830 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:40.339000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:40.343000 audit[3146]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:40.343000 audit[3146]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff4d0d880 a2=0 a3=1 items=0 ppid=2830 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:40.343000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:40.363000 audit[3148]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3148 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:40.363000 audit[3148]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff2e07320 a2=0 a3=1 items=0 ppid=2830 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:40.363000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:40.373000 audit[3148]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3148 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:40.373000 audit[3148]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff2e07320 a2=0 a3=1 items=0 ppid=2830 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:40.373000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:41.384000 audit[3150]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3150 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 
17:22:41.384000 audit[3150]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcaf3cbc0 a2=0 a3=1 items=0 ppid=2830 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:41.384000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:41.391000 audit[3150]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3150 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:41.391000 audit[3150]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcaf3cbc0 a2=0 a3=1 items=0 ppid=2830 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:41.391000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:43.437000 audit[3152]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:43.439677 kernel: kauditd_printk_skb: 25 callbacks suppressed Dec 12 17:22:43.439747 kernel: audit: type=1325 audit(1765560163.437:517): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:43.437000 audit[3152]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffdbeb3100 a2=0 a3=1 items=0 ppid=2830 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:43.446160 kernel: audit: type=1300 audit(1765560163.437:517): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffdbeb3100 a2=0 a3=1 items=0 ppid=2830 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:43.437000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:43.448083 kernel: audit: type=1327 audit(1765560163.437:517): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:43.449000 audit[3152]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:43.449000 audit[3152]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdbeb3100 a2=0 a3=1 items=0 ppid=2830 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:43.456892 kernel: audit: type=1325 audit(1765560163.449:518): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:43.457074 kernel: audit: type=1300 audit(1765560163.449:518): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 
a1=ffffdbeb3100 a2=0 a3=1 items=0 ppid=2830 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:43.457119 kernel: audit: type=1327 audit(1765560163.449:518): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:43.449000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:43.473000 audit[3154]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3154 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:43.473000 audit[3154]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffdc8a3440 a2=0 a3=1 items=0 ppid=2830 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:43.481074 kernel: audit: type=1325 audit(1765560163.473:519): table=filter:117 family=2 entries=22 op=nft_register_rule pid=3154 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:43.481221 kernel: audit: type=1300 audit(1765560163.473:519): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffdc8a3440 a2=0 a3=1 items=0 ppid=2830 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:43.481238 kernel: audit: type=1327 audit(1765560163.473:519): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:43.473000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:43.483000 audit[3154]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3154 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:43.487885 kernel: audit: type=1325 audit(1765560163.483:520): table=nat:118 family=2 entries=12 op=nft_register_rule pid=3154 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:43.483000 audit[3154]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdc8a3440 a2=0 a3=1 items=0 ppid=2830 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:43.483000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:43.487614 systemd[1]: Created slice kubepods-besteffort-pod5a7183bd_9c6c_4b83_b404_89bbf58b5e87.slice - libcontainer container kubepods-besteffort-pod5a7183bd_9c6c_4b83_b404_89bbf58b5e87.slice. 
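The slice created above encodes the pod's QoS class and UID: with the systemd cgroup driver, dashes in the pod UID become underscores, since "-" is the hierarchy separator in systemd slice names. A small sketch of that mapping (pod_slice_name is an illustrative helper, not kubelet source), using the UID shown in the volume records below:

```python
# Derive the systemd slice name for a pod cgroup from its QoS class and UID.
# Dashes in the UID are replaced with underscores because systemd uses "-" to nest slices.
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    return "kubepods-{}-pod{}.slice".format(qos_class, pod_uid.replace("-", "_"))

print(pod_slice_name("besteffort", "5a7183bd-9c6c-4b83-b404-89bbf58b5e87"))
# -> kubepods-besteffort-pod5a7183bd_9c6c_4b83_b404_89bbf58b5e87.slice
```

The repeated driver-call.go / plugins.go errors that follow come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds before any FlexVolume driver has been installed there: the call produces no output, and an empty string cannot be unmarshalled as JSON ("unexpected end of JSON input"). This noise is expected until a driver binary appears in that directory (compare the flexvol-driver-host host-path volume listed for calico-node-kd6nt below).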
Dec 12 17:22:43.518597 kubelet[2721]: I1212 17:22:43.518527 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5a7183bd-9c6c-4b83-b404-89bbf58b5e87-typha-certs\") pod \"calico-typha-758c457c8b-w55g6\" (UID: \"5a7183bd-9c6c-4b83-b404-89bbf58b5e87\") " pod="calico-system/calico-typha-758c457c8b-w55g6" Dec 12 17:22:43.518989 kubelet[2721]: I1212 17:22:43.518771 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a7183bd-9c6c-4b83-b404-89bbf58b5e87-tigera-ca-bundle\") pod \"calico-typha-758c457c8b-w55g6\" (UID: \"5a7183bd-9c6c-4b83-b404-89bbf58b5e87\") " pod="calico-system/calico-typha-758c457c8b-w55g6" Dec 12 17:22:43.518989 kubelet[2721]: I1212 17:22:43.518926 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfdfl\" (UniqueName: \"kubernetes.io/projected/5a7183bd-9c6c-4b83-b404-89bbf58b5e87-kube-api-access-rfdfl\") pod \"calico-typha-758c457c8b-w55g6\" (UID: \"5a7183bd-9c6c-4b83-b404-89bbf58b5e87\") " pod="calico-system/calico-typha-758c457c8b-w55g6" Dec 12 17:22:43.699730 systemd[1]: Created slice kubepods-besteffort-pod3b1f1dcc_ac3b_40c4_bef6_2c87cb638ca0.slice - libcontainer container kubepods-besteffort-pod3b1f1dcc_ac3b_40c4_bef6_2c87cb638ca0.slice. Dec 12 17:22:43.720128 kubelet[2721]: I1212 17:22:43.720072 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0-flexvol-driver-host\") pod \"calico-node-kd6nt\" (UID: \"3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0\") " pod="calico-system/calico-node-kd6nt" Dec 12 17:22:43.720128 kubelet[2721]: I1212 17:22:43.720128 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0-lib-modules\") pod \"calico-node-kd6nt\" (UID: \"3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0\") " pod="calico-system/calico-node-kd6nt" Dec 12 17:22:43.720308 kubelet[2721]: I1212 17:22:43.720154 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0-tigera-ca-bundle\") pod \"calico-node-kd6nt\" (UID: \"3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0\") " pod="calico-system/calico-node-kd6nt" Dec 12 17:22:43.720308 kubelet[2721]: I1212 17:22:43.720170 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0-var-run-calico\") pod \"calico-node-kd6nt\" (UID: \"3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0\") " pod="calico-system/calico-node-kd6nt" Dec 12 17:22:43.720308 kubelet[2721]: I1212 17:22:43.720189 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmpwr\" (UniqueName: \"kubernetes.io/projected/3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0-kube-api-access-bmpwr\") pod \"calico-node-kd6nt\" (UID: \"3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0\") " pod="calico-system/calico-node-kd6nt" Dec 12 17:22:43.720308 kubelet[2721]: I1212 17:22:43.720224 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0-node-certs\") pod \"calico-node-kd6nt\" (UID: \"3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0\") " pod="calico-system/calico-node-kd6nt" Dec 12 17:22:43.720308 kubelet[2721]: I1212 17:22:43.720239 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0-xtables-lock\") pod \"calico-node-kd6nt\" (UID: \"3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0\") " pod="calico-system/calico-node-kd6nt" Dec 12 17:22:43.720414 kubelet[2721]: I1212 17:22:43.720254 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0-cni-log-dir\") pod \"calico-node-kd6nt\" (UID: \"3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0\") " pod="calico-system/calico-node-kd6nt" Dec 12 17:22:43.720414 kubelet[2721]: I1212 17:22:43.720269 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0-policysync\") pod \"calico-node-kd6nt\" (UID: \"3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0\") " pod="calico-system/calico-node-kd6nt" Dec 12 17:22:43.720414 kubelet[2721]: I1212 17:22:43.720309 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0-var-lib-calico\") pod \"calico-node-kd6nt\" (UID: \"3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0\") " pod="calico-system/calico-node-kd6nt" Dec 12 17:22:43.720414 kubelet[2721]: I1212 17:22:43.720338 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0-cni-bin-dir\") pod \"calico-node-kd6nt\" (UID: \"3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0\") " pod="calico-system/calico-node-kd6nt" Dec 12 17:22:43.720414 kubelet[2721]: I1212 17:22:43.720354 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0-cni-net-dir\") pod \"calico-node-kd6nt\" (UID: \"3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0\") " pod="calico-system/calico-node-kd6nt" Dec 12 17:22:43.801263 kubelet[2721]: E1212 17:22:43.801187 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:43.802030 containerd[1581]: time="2025-12-12T17:22:43.801789904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-758c457c8b-w55g6,Uid:5a7183bd-9c6c-4b83-b404-89bbf58b5e87,Namespace:calico-system,Attempt:0,}" Dec 12 17:22:43.826259 kubelet[2721]: E1212 17:22:43.826216 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:43.826259 kubelet[2721]: W1212 17:22:43.826243 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:43.826395 kubelet[2721]: E1212 17:22:43.826278 2721 plugins.go:695] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:43.835542 kubelet[2721]: E1212 17:22:43.835458 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:43.835542 kubelet[2721]: W1212 17:22:43.835483 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:43.835542 kubelet[2721]: E1212 17:22:43.835502 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:43.842989 kubelet[2721]: E1212 17:22:43.842959 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:43.842989 kubelet[2721]: W1212 17:22:43.842981 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:43.843133 kubelet[2721]: E1212 17:22:43.842999 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:43.950514 kubelet[2721]: E1212 17:22:43.950134 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7zbf" podUID="1e0e30e4-6fdf-475c-9a4a-59287d927d5d" Dec 12 17:22:43.968060 containerd[1581]: time="2025-12-12T17:22:43.967609726Z" level=info msg="connecting to shim 7154e6b5e1aad5f8ff054f9f97a2b78fce2caf0f628df3d26199c3cf07098037" address="unix:///run/containerd/s/d1216f047627c0fa1c27b3b3c0f9c8697de99427ad2cfcba30aa15a4a3eb3fca" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:22:44.003288 kubelet[2721]: E1212 17:22:44.003236 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:44.004237 containerd[1581]: time="2025-12-12T17:22:44.004155224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kd6nt,Uid:3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0,Namespace:calico-system,Attempt:0,}" Dec 12 17:22:44.007991 kubelet[2721]: E1212 17:22:44.007962 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.008118 kubelet[2721]: W1212 17:22:44.007998 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.008118 kubelet[2721]: E1212 17:22:44.008022 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:44.008351 systemd[1]: Started cri-containerd-7154e6b5e1aad5f8ff054f9f97a2b78fce2caf0f628df3d26199c3cf07098037.scope - libcontainer container 7154e6b5e1aad5f8ff054f9f97a2b78fce2caf0f628df3d26199c3cf07098037. Dec 12 17:22:44.008638 kubelet[2721]: E1212 17:22:44.008587 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.008880 kubelet[2721]: W1212 17:22:44.008602 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.008880 kubelet[2721]: E1212 17:22:44.008656 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.009090 kubelet[2721]: E1212 17:22:44.009070 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.009090 kubelet[2721]: W1212 17:22:44.009086 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.009144 kubelet[2721]: E1212 17:22:44.009097 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.009330 kubelet[2721]: E1212 17:22:44.009317 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.009330 kubelet[2721]: W1212 17:22:44.009328 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.009374 kubelet[2721]: E1212 17:22:44.009349 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.009560 kubelet[2721]: E1212 17:22:44.009547 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.009560 kubelet[2721]: W1212 17:22:44.009558 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.009615 kubelet[2721]: E1212 17:22:44.009586 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:44.009840 kubelet[2721]: E1212 17:22:44.009824 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.009840 kubelet[2721]: W1212 17:22:44.009838 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.009887 kubelet[2721]: E1212 17:22:44.009848 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.010039 kubelet[2721]: E1212 17:22:44.010024 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.010493 kubelet[2721]: W1212 17:22:44.010097 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.010493 kubelet[2721]: E1212 17:22:44.010115 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.010493 kubelet[2721]: E1212 17:22:44.010302 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.010493 kubelet[2721]: W1212 17:22:44.010313 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.010493 kubelet[2721]: E1212 17:22:44.010336 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.010650 kubelet[2721]: E1212 17:22:44.010533 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.010650 kubelet[2721]: W1212 17:22:44.010542 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.010650 kubelet[2721]: E1212 17:22:44.010552 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.010842 kubelet[2721]: E1212 17:22:44.010818 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.010842 kubelet[2721]: W1212 17:22:44.010832 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.010842 kubelet[2721]: E1212 17:22:44.010841 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:44.011452 kubelet[2721]: E1212 17:22:44.011408 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.011452 kubelet[2721]: W1212 17:22:44.011448 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.011532 kubelet[2721]: E1212 17:22:44.011461 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.012262 kubelet[2721]: E1212 17:22:44.012241 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.012262 kubelet[2721]: W1212 17:22:44.012258 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.012344 kubelet[2721]: E1212 17:22:44.012270 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.012445 kubelet[2721]: E1212 17:22:44.012427 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.012475 kubelet[2721]: W1212 17:22:44.012453 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.012475 kubelet[2721]: E1212 17:22:44.012464 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.012785 kubelet[2721]: E1212 17:22:44.012719 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.012785 kubelet[2721]: W1212 17:22:44.012733 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.012785 kubelet[2721]: E1212 17:22:44.012745 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.013283 kubelet[2721]: E1212 17:22:44.013137 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.013326 kubelet[2721]: W1212 17:22:44.013281 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.013326 kubelet[2721]: E1212 17:22:44.013299 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:44.014383 kubelet[2721]: E1212 17:22:44.014352 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.014383 kubelet[2721]: W1212 17:22:44.014372 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.014444 kubelet[2721]: E1212 17:22:44.014384 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.014623 kubelet[2721]: E1212 17:22:44.014604 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.014650 kubelet[2721]: W1212 17:22:44.014624 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.014672 kubelet[2721]: E1212 17:22:44.014650 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.015474 kubelet[2721]: E1212 17:22:44.015453 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.015520 kubelet[2721]: W1212 17:22:44.015470 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.015520 kubelet[2721]: E1212 17:22:44.015496 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.015721 kubelet[2721]: E1212 17:22:44.015704 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.016088 kubelet[2721]: W1212 17:22:44.016059 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.016088 kubelet[2721]: E1212 17:22:44.016086 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.016314 kubelet[2721]: E1212 17:22:44.016297 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.016314 kubelet[2721]: W1212 17:22:44.016311 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.016374 kubelet[2721]: E1212 17:22:44.016323 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:44.022786 kubelet[2721]: E1212 17:22:44.022759 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.022786 kubelet[2721]: W1212 17:22:44.022779 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.023118 kubelet[2721]: E1212 17:22:44.022795 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.023118 kubelet[2721]: I1212 17:22:44.022932 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1e0e30e4-6fdf-475c-9a4a-59287d927d5d-varrun\") pod \"csi-node-driver-c7zbf\" (UID: \"1e0e30e4-6fdf-475c-9a4a-59287d927d5d\") " pod="calico-system/csi-node-driver-c7zbf" Dec 12 17:22:44.023324 kubelet[2721]: E1212 17:22:44.023304 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.023324 kubelet[2721]: W1212 17:22:44.023321 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.023380 kubelet[2721]: E1212 17:22:44.023344 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.023489 kubelet[2721]: I1212 17:22:44.023474 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8stxk\" (UniqueName: \"kubernetes.io/projected/1e0e30e4-6fdf-475c-9a4a-59287d927d5d-kube-api-access-8stxk\") pod \"csi-node-driver-c7zbf\" (UID: \"1e0e30e4-6fdf-475c-9a4a-59287d927d5d\") " pod="calico-system/csi-node-driver-c7zbf" Dec 12 17:22:44.023973 kubelet[2721]: E1212 17:22:44.023749 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.023973 kubelet[2721]: W1212 17:22:44.023768 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.023973 kubelet[2721]: E1212 17:22:44.023807 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.024378 kubelet[2721]: E1212 17:22:44.024249 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.024378 kubelet[2721]: W1212 17:22:44.024267 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.024378 kubelet[2721]: E1212 17:22:44.024283 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:44.024679 kubelet[2721]: E1212 17:22:44.024662 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.024679 kubelet[2721]: W1212 17:22:44.024677 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.024746 kubelet[2721]: E1212 17:22:44.024695 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.025294 kubelet[2721]: E1212 17:22:44.025274 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.025294 kubelet[2721]: W1212 17:22:44.025292 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.025386 kubelet[2721]: E1212 17:22:44.025335 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.025503 kubelet[2721]: E1212 17:22:44.025486 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.025503 kubelet[2721]: W1212 17:22:44.025500 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.025570 kubelet[2721]: E1212 17:22:44.025511 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.025570 kubelet[2721]: I1212 17:22:44.025535 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e0e30e4-6fdf-475c-9a4a-59287d927d5d-kubelet-dir\") pod \"csi-node-driver-c7zbf\" (UID: \"1e0e30e4-6fdf-475c-9a4a-59287d927d5d\") " pod="calico-system/csi-node-driver-c7zbf" Dec 12 17:22:44.025777 kubelet[2721]: E1212 17:22:44.025752 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.025777 kubelet[2721]: W1212 17:22:44.025769 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.025842 kubelet[2721]: E1212 17:22:44.025798 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:44.025842 kubelet[2721]: I1212 17:22:44.025817 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1e0e30e4-6fdf-475c-9a4a-59287d927d5d-socket-dir\") pod \"csi-node-driver-c7zbf\" (UID: \"1e0e30e4-6fdf-475c-9a4a-59287d927d5d\") " pod="calico-system/csi-node-driver-c7zbf" Dec 12 17:22:44.026084 kubelet[2721]: E1212 17:22:44.026066 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.026084 kubelet[2721]: W1212 17:22:44.026083 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.026247 kubelet[2721]: E1212 17:22:44.026100 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.026247 kubelet[2721]: I1212 17:22:44.026119 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1e0e30e4-6fdf-475c-9a4a-59287d927d5d-registration-dir\") pod \"csi-node-driver-c7zbf\" (UID: \"1e0e30e4-6fdf-475c-9a4a-59287d927d5d\") " pod="calico-system/csi-node-driver-c7zbf" Dec 12 17:22:44.025000 audit: BPF prog-id=149 op=LOAD Dec 12 17:22:44.026393 kubelet[2721]: E1212 17:22:44.026363 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.026393 kubelet[2721]: W1212 17:22:44.026375 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.026436 kubelet[2721]: E1212 17:22:44.026394 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.025000 audit: BPF prog-id=150 op=LOAD Dec 12 17:22:44.026969 kubelet[2721]: E1212 17:22:44.026950 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.026969 kubelet[2721]: W1212 17:22:44.026968 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.027094 kubelet[2721]: E1212 17:22:44.026986 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:44.025000 audit[3192]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3177 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:44.025000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353465366235653161616435663866663035346639663937613262 Dec 12 17:22:44.026000 audit: BPF prog-id=150 op=UNLOAD Dec 12 17:22:44.026000 audit[3192]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3177 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:44.026000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353465366235653161616435663866663035346639663937613262 Dec 12 17:22:44.027457 kubelet[2721]: E1212 17:22:44.027231 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.027457 kubelet[2721]: W1212 17:22:44.027242 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.027457 kubelet[2721]: E1212 17:22:44.027259 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.027457 kubelet[2721]: E1212 17:22:44.027436 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.027457 kubelet[2721]: W1212 17:22:44.027446 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.027568 kubelet[2721]: E1212 17:22:44.027514 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.027642 kubelet[2721]: E1212 17:22:44.027614 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.027642 kubelet[2721]: W1212 17:22:44.027639 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.027685 kubelet[2721]: E1212 17:22:44.027651 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:44.027853 kubelet[2721]: E1212 17:22:44.027836 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.027853 kubelet[2721]: W1212 17:22:44.027851 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.027907 kubelet[2721]: E1212 17:22:44.027861 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.026000 audit: BPF prog-id=151 op=LOAD Dec 12 17:22:44.026000 audit[3192]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3177 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:44.026000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353465366235653161616435663866663035346639663937613262 Dec 12 17:22:44.027000 audit: BPF prog-id=152 op=LOAD Dec 12 17:22:44.027000 audit[3192]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3177 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:44.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353465366235653161616435663866663035346639663937613262 Dec 12 17:22:44.027000 audit: BPF prog-id=152 op=UNLOAD Dec 12 17:22:44.027000 audit[3192]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3177 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:44.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353465366235653161616435663866663035346639663937613262 Dec 12 17:22:44.027000 audit: BPF prog-id=151 op=UNLOAD Dec 12 17:22:44.027000 audit[3192]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3177 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:44.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353465366235653161616435663866663035346639663937613262 Dec 12 17:22:44.027000 audit: BPF prog-id=153 op=LOAD Dec 12 17:22:44.027000 audit[3192]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3177 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:44.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353465366235653161616435663866663035346639663937613262 Dec 12 17:22:44.110064 containerd[1581]: time="2025-12-12T17:22:44.109779097Z" level=info msg="connecting to shim b598099f34ac1c418dd8b34790feb9060f9b755b94736e44e52f6131f917325b" address="unix:///run/containerd/s/2af7180804b08f80d86725b8b1a99a6830937b9aea3eed9733e53bcc018fc249" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:22:44.117909 containerd[1581]: time="2025-12-12T17:22:44.117863453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-758c457c8b-w55g6,Uid:5a7183bd-9c6c-4b83-b404-89bbf58b5e87,Namespace:calico-system,Attempt:0,} returns sandbox id \"7154e6b5e1aad5f8ff054f9f97a2b78fce2caf0f628df3d26199c3cf07098037\"" Dec 12 17:22:44.121005 kubelet[2721]: E1212 17:22:44.120959 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:44.126132 containerd[1581]: time="2025-12-12T17:22:44.126095461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 17:22:44.127492 kubelet[2721]: E1212 17:22:44.127469 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.127492 kubelet[2721]: W1212 17:22:44.127488 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.127612 kubelet[2721]: E1212 17:22:44.127507 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.127736 kubelet[2721]: E1212 17:22:44.127721 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.127736 kubelet[2721]: W1212 17:22:44.127733 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.127802 kubelet[2721]: E1212 17:22:44.127749 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:44.128179 kubelet[2721]: E1212 17:22:44.128158 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.128229 kubelet[2721]: W1212 17:22:44.128179 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.128292 kubelet[2721]: E1212 17:22:44.128238 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.129085 kubelet[2721]: E1212 17:22:44.129069 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.129085 kubelet[2721]: W1212 17:22:44.129084 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.129159 kubelet[2721]: E1212 17:22:44.129102 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.130658 kubelet[2721]: E1212 17:22:44.130383 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.130658 kubelet[2721]: W1212 17:22:44.130510 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.130658 kubelet[2721]: E1212 17:22:44.130567 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.131311 kubelet[2721]: E1212 17:22:44.131294 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.131311 kubelet[2721]: W1212 17:22:44.131308 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.131420 kubelet[2721]: E1212 17:22:44.131400 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.131566 kubelet[2721]: E1212 17:22:44.131550 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.131566 kubelet[2721]: W1212 17:22:44.131563 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.131651 kubelet[2721]: E1212 17:22:44.131586 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:44.131747 kubelet[2721]: E1212 17:22:44.131731 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.131842 kubelet[2721]: W1212 17:22:44.131747 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.131842 kubelet[2721]: E1212 17:22:44.131782 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.131916 kubelet[2721]: E1212 17:22:44.131903 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.131916 kubelet[2721]: W1212 17:22:44.131912 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.132173 kubelet[2721]: E1212 17:22:44.131960 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.132173 kubelet[2721]: E1212 17:22:44.132061 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.132173 kubelet[2721]: W1212 17:22:44.132070 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.132173 kubelet[2721]: E1212 17:22:44.132105 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.132295 kubelet[2721]: E1212 17:22:44.132276 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.132295 kubelet[2721]: W1212 17:22:44.132289 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.132295 kubelet[2721]: E1212 17:22:44.132306 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.132464 kubelet[2721]: E1212 17:22:44.132451 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.132464 kubelet[2721]: W1212 17:22:44.132462 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.132517 kubelet[2721]: E1212 17:22:44.132475 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:44.132673 kubelet[2721]: E1212 17:22:44.132661 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.132673 kubelet[2721]: W1212 17:22:44.132673 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.132728 kubelet[2721]: E1212 17:22:44.132686 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.132857 kubelet[2721]: E1212 17:22:44.132843 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.132857 kubelet[2721]: W1212 17:22:44.132855 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.132912 kubelet[2721]: E1212 17:22:44.132868 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.133189 kubelet[2721]: E1212 17:22:44.133173 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.133229 kubelet[2721]: W1212 17:22:44.133189 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.133299 kubelet[2721]: E1212 17:22:44.133279 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.133498 kubelet[2721]: E1212 17:22:44.133479 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.133498 kubelet[2721]: W1212 17:22:44.133494 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.133669 kubelet[2721]: E1212 17:22:44.133543 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.133775 kubelet[2721]: E1212 17:22:44.133701 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.133775 kubelet[2721]: W1212 17:22:44.133714 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.133775 kubelet[2721]: E1212 17:22:44.133741 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:44.133881 kubelet[2721]: E1212 17:22:44.133864 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.133881 kubelet[2721]: W1212 17:22:44.133875 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.133881 kubelet[2721]: E1212 17:22:44.133904 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.134074 kubelet[2721]: E1212 17:22:44.134058 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.134074 kubelet[2721]: W1212 17:22:44.134071 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.134133 kubelet[2721]: E1212 17:22:44.134085 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.134262 kubelet[2721]: E1212 17:22:44.134247 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.134262 kubelet[2721]: W1212 17:22:44.134259 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.134652 kubelet[2721]: E1212 17:22:44.134273 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.134751 kubelet[2721]: E1212 17:22:44.134734 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.134815 kubelet[2721]: W1212 17:22:44.134802 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.134890 kubelet[2721]: E1212 17:22:44.134879 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.135124 kubelet[2721]: E1212 17:22:44.135106 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.135124 kubelet[2721]: W1212 17:22:44.135119 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.135192 kubelet[2721]: E1212 17:22:44.135138 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:44.135526 kubelet[2721]: E1212 17:22:44.135506 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.135526 kubelet[2721]: W1212 17:22:44.135523 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.135596 kubelet[2721]: E1212 17:22:44.135540 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.135807 kubelet[2721]: E1212 17:22:44.135792 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.135837 kubelet[2721]: W1212 17:22:44.135808 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.135837 kubelet[2721]: E1212 17:22:44.135823 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.136109 kubelet[2721]: E1212 17:22:44.136042 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.136109 kubelet[2721]: W1212 17:22:44.136053 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.136109 kubelet[2721]: E1212 17:22:44.136063 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:44.137279 systemd[1]: Started cri-containerd-b598099f34ac1c418dd8b34790feb9060f9b755b94736e44e52f6131f917325b.scope - libcontainer container b598099f34ac1c418dd8b34790feb9060f9b755b94736e44e52f6131f917325b. Dec 12 17:22:44.148367 kubelet[2721]: E1212 17:22:44.148336 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:44.148497 kubelet[2721]: W1212 17:22:44.148483 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:44.148738 kubelet[2721]: E1212 17:22:44.148719 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:44.156000 audit: BPF prog-id=154 op=LOAD Dec 12 17:22:44.157000 audit: BPF prog-id=155 op=LOAD Dec 12 17:22:44.157000 audit[3275]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3263 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:44.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235393830393966333461633163343138646438623334373930666562 Dec 12 17:22:44.157000 audit: BPF prog-id=155 op=UNLOAD Dec 12 17:22:44.157000 audit[3275]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:44.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235393830393966333461633163343138646438623334373930666562 Dec 12 17:22:44.157000 audit: BPF prog-id=156 op=LOAD Dec 12 17:22:44.157000 audit[3275]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3263 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:44.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235393830393966333461633163343138646438623334373930666562 Dec 12 17:22:44.157000 audit: BPF prog-id=157 op=LOAD Dec 12 17:22:44.157000 audit[3275]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3263 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:44.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235393830393966333461633163343138646438623334373930666562 Dec 12 17:22:44.157000 audit: BPF prog-id=157 op=UNLOAD Dec 12 17:22:44.157000 audit[3275]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:44.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235393830393966333461633163343138646438623334373930666562 Dec 12 17:22:44.157000 audit: BPF prog-id=156 op=UNLOAD Dec 
12 17:22:44.157000 audit[3275]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:44.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235393830393966333461633163343138646438623334373930666562 Dec 12 17:22:44.157000 audit: BPF prog-id=158 op=LOAD Dec 12 17:22:44.157000 audit[3275]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3263 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:44.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235393830393966333461633163343138646438623334373930666562 Dec 12 17:22:44.197980 containerd[1581]: time="2025-12-12T17:22:44.197939383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kd6nt,Uid:3b1f1dcc-ac3b-40c4-bef6-2c87cb638ca0,Namespace:calico-system,Attempt:0,} returns sandbox id \"b598099f34ac1c418dd8b34790feb9060f9b755b94736e44e52f6131f917325b\"" Dec 12 17:22:44.199177 kubelet[2721]: E1212 17:22:44.199133 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:44.505000 audit[3328]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=3328 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:44.505000 audit[3328]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc48b3510 a2=0 a3=1 items=0 ppid=2830 pid=3328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:44.505000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:44.510000 audit[3328]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=3328 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:44.510000 audit[3328]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc48b3510 a2=0 a3=1 items=0 ppid=2830 pid=3328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:44.510000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:45.035359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3514727015.mount: Deactivated successfully. 
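The audit PROCTITLE values above are hex-encoded process titles: auditd hex-encodes the title whenever it contains non-printable bytes, and the individual argv elements are separated by NUL (0x00) bytes. A minimal decoding sketch in Python (decode_proctitle and the deliberately truncated sample string are illustrative additions, not part of the log):

#!/usr/bin/env python3
# Decode an audit PROCTITLE field: the value is the raw process title,
# hex-encoded, with argv elements separated by NUL (0x00) bytes.
def decode_proctitle(hex_value):
    raw = bytes.fromhex(hex_value)
    return [part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part]

# First arguments of the runc proctitle from the records above (truncated here).
sample = ("72756E63002D2D726F6F74"
          "002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
          "002D2D6C6F67")
print(decode_proctitle(sample))
# -> ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']

Decoded this way, the runc records above resolve to the shim invocations for the sandbox and container IDs that containerd reports in the surrounding lines.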
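The recurring kubelet FlexVolume errors in this log share one cause: dynamic plugin probing executes the driver at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and JSON-decodes its stdout, so a missing executable yields empty output and the "unexpected end of JSON input" unmarshal failure. A minimal sketch of the response shape the kubelet expects from init, assuming the standard FlexVolume driver contract (this stub is hypothetical and is not the real nodeagent~uds driver):

#!/usr/bin/env python3
# Hypothetical stand-in for the missing FlexVolume driver binary at
# /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds.
# The kubelet runs "<driver> init" and JSON-decodes stdout; empty stdout is
# what produces the "unexpected end of JSON input" errors in this log.
import json
import sys

def main():
    if len(sys.argv) > 1 and sys.argv[1] == "init":
        # Minimal successful init response under the FlexVolume driver contract.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Report unimplemented calls explicitly instead of printing nothing.
    print(json.dumps({"status": "Not supported"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())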
Dec 12 17:22:45.392699 kubelet[2721]: E1212 17:22:45.392628 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7zbf" podUID="1e0e30e4-6fdf-475c-9a4a-59287d927d5d" Dec 12 17:22:46.033051 containerd[1581]: time="2025-12-12T17:22:46.032973918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Dec 12 17:22:46.036492 containerd[1581]: time="2025-12-12T17:22:46.036434158Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.910292292s" Dec 12 17:22:46.036492 containerd[1581]: time="2025-12-12T17:22:46.036484002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 12 17:22:46.038257 containerd[1581]: time="2025-12-12T17:22:46.038078771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 17:22:46.051243 containerd[1581]: time="2025-12-12T17:22:46.051173310Z" level=info msg="CreateContainer within sandbox \"7154e6b5e1aad5f8ff054f9f97a2b78fce2caf0f628df3d26199c3cf07098037\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 17:22:46.058953 containerd[1581]: time="2025-12-12T17:22:46.058148754Z" level=info msg="Container 570731c0819b43223d4ca973dba443062a231cff3b8c1038642118272af025e4: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:22:46.065568 containerd[1581]: time="2025-12-12T17:22:46.065502789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:46.065862 containerd[1581]: time="2025-12-12T17:22:46.065808374Z" level=info msg="CreateContainer within sandbox \"7154e6b5e1aad5f8ff054f9f97a2b78fce2caf0f628df3d26199c3cf07098037\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"570731c0819b43223d4ca973dba443062a231cff3b8c1038642118272af025e4\"" Dec 12 17:22:46.066206 containerd[1581]: time="2025-12-12T17:22:46.066176723Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:46.066567 containerd[1581]: time="2025-12-12T17:22:46.066537673Z" level=info msg="StartContainer for \"570731c0819b43223d4ca973dba443062a231cff3b8c1038642118272af025e4\"" Dec 12 17:22:46.067055 containerd[1581]: time="2025-12-12T17:22:46.066985389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:46.067831 containerd[1581]: time="2025-12-12T17:22:46.067801735Z" level=info msg="connecting to shim 570731c0819b43223d4ca973dba443062a231cff3b8c1038642118272af025e4" address="unix:///run/containerd/s/d1216f047627c0fa1c27b3b3c0f9c8697de99427ad2cfcba30aa15a4a3eb3fca" protocol=ttrpc version=3 Dec 12 17:22:46.095305 systemd[1]: Started 
cri-containerd-570731c0819b43223d4ca973dba443062a231cff3b8c1038642118272af025e4.scope - libcontainer container 570731c0819b43223d4ca973dba443062a231cff3b8c1038642118272af025e4. Dec 12 17:22:46.113000 audit: BPF prog-id=159 op=LOAD Dec 12 17:22:46.115000 audit: BPF prog-id=160 op=LOAD Dec 12 17:22:46.115000 audit[3339]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3177 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:46.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537303733316330383139623433323233643463613937336462613434 Dec 12 17:22:46.115000 audit: BPF prog-id=160 op=UNLOAD Dec 12 17:22:46.115000 audit[3339]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3177 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:46.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537303733316330383139623433323233643463613937336462613434 Dec 12 17:22:46.115000 audit: BPF prog-id=161 op=LOAD Dec 12 17:22:46.115000 audit[3339]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3177 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:46.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537303733316330383139623433323233643463613937336462613434 Dec 12 17:22:46.115000 audit: BPF prog-id=162 op=LOAD Dec 12 17:22:46.115000 audit[3339]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3177 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:46.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537303733316330383139623433323233643463613937336462613434 Dec 12 17:22:46.116000 audit: BPF prog-id=162 op=UNLOAD Dec 12 17:22:46.116000 audit[3339]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3177 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:46.116000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537303733316330383139623433323233643463613937336462613434 Dec 12 17:22:46.116000 audit: BPF prog-id=161 op=UNLOAD Dec 12 17:22:46.116000 audit[3339]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3177 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:46.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537303733316330383139623433323233643463613937336462613434 Dec 12 17:22:46.116000 audit: BPF prog-id=163 op=LOAD Dec 12 17:22:46.116000 audit[3339]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3177 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:46.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537303733316330383139623433323233643463613937336462613434 Dec 12 17:22:46.141540 containerd[1581]: time="2025-12-12T17:22:46.141500856Z" level=info msg="StartContainer for \"570731c0819b43223d4ca973dba443062a231cff3b8c1038642118272af025e4\" returns successfully" Dec 12 17:22:46.489926 kubelet[2721]: E1212 17:22:46.489573 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:46.510784 kubelet[2721]: I1212 17:22:46.510718 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-758c457c8b-w55g6" podStartSLOduration=1.594940792 podStartE2EDuration="3.51069784s" podCreationTimestamp="2025-12-12 17:22:43 +0000 UTC" firstStartedPulling="2025-12-12 17:22:44.121601704 +0000 UTC m=+22.832405962" lastFinishedPulling="2025-12-12 17:22:46.037358792 +0000 UTC m=+24.748163010" observedRunningTime="2025-12-12 17:22:46.510406777 +0000 UTC m=+25.221211035" watchObservedRunningTime="2025-12-12 17:22:46.51069784 +0000 UTC m=+25.221502098" Dec 12 17:22:46.535429 kubelet[2721]: E1212 17:22:46.535387 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.535667 kubelet[2721]: W1212 17:22:46.535547 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.535667 kubelet[2721]: E1212 17:22:46.535574 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:46.535838 kubelet[2721]: E1212 17:22:46.535816 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.535901 kubelet[2721]: W1212 17:22:46.535888 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.535950 kubelet[2721]: E1212 17:22:46.535941 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.536293 kubelet[2721]: E1212 17:22:46.536181 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.536293 kubelet[2721]: W1212 17:22:46.536195 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.536293 kubelet[2721]: E1212 17:22:46.536206 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.536475 kubelet[2721]: E1212 17:22:46.536461 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.536529 kubelet[2721]: W1212 17:22:46.536518 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.536580 kubelet[2721]: E1212 17:22:46.536571 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.536812 kubelet[2721]: E1212 17:22:46.536798 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.536896 kubelet[2721]: W1212 17:22:46.536883 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.536953 kubelet[2721]: E1212 17:22:46.536942 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.537268 kubelet[2721]: E1212 17:22:46.537167 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.537268 kubelet[2721]: W1212 17:22:46.537180 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.537268 kubelet[2721]: E1212 17:22:46.537191 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:46.537408 kubelet[2721]: E1212 17:22:46.537396 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.537471 kubelet[2721]: W1212 17:22:46.537459 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.537521 kubelet[2721]: E1212 17:22:46.537512 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.537743 kubelet[2721]: E1212 17:22:46.537730 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.537955 kubelet[2721]: W1212 17:22:46.537809 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.537955 kubelet[2721]: E1212 17:22:46.537835 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.538146 kubelet[2721]: E1212 17:22:46.538133 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.538212 kubelet[2721]: W1212 17:22:46.538200 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.538268 kubelet[2721]: E1212 17:22:46.538259 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.538487 kubelet[2721]: E1212 17:22:46.538474 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.538555 kubelet[2721]: W1212 17:22:46.538544 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.538607 kubelet[2721]: E1212 17:22:46.538597 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.538945 kubelet[2721]: E1212 17:22:46.538837 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.538945 kubelet[2721]: W1212 17:22:46.538850 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.538945 kubelet[2721]: E1212 17:22:46.538861 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:46.539142 kubelet[2721]: E1212 17:22:46.539130 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.539197 kubelet[2721]: W1212 17:22:46.539187 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.539244 kubelet[2721]: E1212 17:22:46.539235 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.539567 kubelet[2721]: E1212 17:22:46.539459 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.539567 kubelet[2721]: W1212 17:22:46.539482 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.539567 kubelet[2721]: E1212 17:22:46.539491 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.539725 kubelet[2721]: E1212 17:22:46.539713 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.539777 kubelet[2721]: W1212 17:22:46.539768 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.539854 kubelet[2721]: E1212 17:22:46.539840 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.540108 kubelet[2721]: E1212 17:22:46.540093 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.540255 kubelet[2721]: W1212 17:22:46.540176 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.540255 kubelet[2721]: E1212 17:22:46.540193 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.549508 kubelet[2721]: E1212 17:22:46.549483 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.549508 kubelet[2721]: W1212 17:22:46.549506 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.549640 kubelet[2721]: E1212 17:22:46.549525 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:46.549767 kubelet[2721]: E1212 17:22:46.549754 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.549767 kubelet[2721]: W1212 17:22:46.549766 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.549816 kubelet[2721]: E1212 17:22:46.549783 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.550004 kubelet[2721]: E1212 17:22:46.549986 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.550051 kubelet[2721]: W1212 17:22:46.550006 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.550075 kubelet[2721]: E1212 17:22:46.550027 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.550281 kubelet[2721]: E1212 17:22:46.550269 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.550311 kubelet[2721]: W1212 17:22:46.550282 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.550311 kubelet[2721]: E1212 17:22:46.550297 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.550450 kubelet[2721]: E1212 17:22:46.550440 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.550450 kubelet[2721]: W1212 17:22:46.550450 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.550500 kubelet[2721]: E1212 17:22:46.550464 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.550637 kubelet[2721]: E1212 17:22:46.550626 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.550668 kubelet[2721]: W1212 17:22:46.550638 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.550668 kubelet[2721]: E1212 17:22:46.550651 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:46.550903 kubelet[2721]: E1212 17:22:46.550888 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.550937 kubelet[2721]: W1212 17:22:46.550904 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.550937 kubelet[2721]: E1212 17:22:46.550921 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.551135 kubelet[2721]: E1212 17:22:46.551124 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.551135 kubelet[2721]: W1212 17:22:46.551135 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.551201 kubelet[2721]: E1212 17:22:46.551161 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.551283 kubelet[2721]: E1212 17:22:46.551272 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.551320 kubelet[2721]: W1212 17:22:46.551283 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.551320 kubelet[2721]: E1212 17:22:46.551306 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.551426 kubelet[2721]: E1212 17:22:46.551416 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.551458 kubelet[2721]: W1212 17:22:46.551426 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.551458 kubelet[2721]: E1212 17:22:46.551439 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.551970 kubelet[2721]: E1212 17:22:46.551955 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.552008 kubelet[2721]: W1212 17:22:46.551971 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.552008 kubelet[2721]: E1212 17:22:46.551989 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:46.552155 kubelet[2721]: E1212 17:22:46.552140 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.552155 kubelet[2721]: W1212 17:22:46.552152 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.552207 kubelet[2721]: E1212 17:22:46.552166 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.552426 kubelet[2721]: E1212 17:22:46.552371 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.552462 kubelet[2721]: W1212 17:22:46.552427 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.552462 kubelet[2721]: E1212 17:22:46.552445 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.552798 kubelet[2721]: E1212 17:22:46.552694 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.552798 kubelet[2721]: W1212 17:22:46.552710 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.552798 kubelet[2721]: E1212 17:22:46.552722 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.553923 kubelet[2721]: E1212 17:22:46.553757 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.553923 kubelet[2721]: W1212 17:22:46.553773 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.553923 kubelet[2721]: E1212 17:22:46.553787 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.554103 kubelet[2721]: E1212 17:22:46.554090 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.554155 kubelet[2721]: W1212 17:22:46.554145 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.554204 kubelet[2721]: E1212 17:22:46.554194 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:22:46.554610 kubelet[2721]: E1212 17:22:46.554594 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.554679 kubelet[2721]: W1212 17:22:46.554666 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.554729 kubelet[2721]: E1212 17:22:46.554719 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:46.555920 kubelet[2721]: E1212 17:22:46.555899 2721 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:22:46.555920 kubelet[2721]: W1212 17:22:46.555918 2721 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:22:46.555995 kubelet[2721]: E1212 17:22:46.555934 2721 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:22:47.137192 containerd[1581]: time="2025-12-12T17:22:47.137148840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:47.140525 containerd[1581]: time="2025-12-12T17:22:47.140463297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4262566" Dec 12 17:22:47.141299 containerd[1581]: time="2025-12-12T17:22:47.141234476Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:47.145251 containerd[1581]: time="2025-12-12T17:22:47.145142659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:47.145743 containerd[1581]: time="2025-12-12T17:22:47.145702902Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.10759289s" Dec 12 17:22:47.145743 containerd[1581]: time="2025-12-12T17:22:47.145738945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 12 17:22:47.148468 containerd[1581]: time="2025-12-12T17:22:47.148431233Z" level=info msg="CreateContainer within sandbox \"b598099f34ac1c418dd8b34790feb9060f9b755b94736e44e52f6131f917325b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 17:22:47.163080 containerd[1581]: time="2025-12-12T17:22:47.161997564Z" level=info msg="Container 9334b21e23c4a145a63bec6ddc62aab8de93247b8c3a1c5bb44f61ef3a7018a4: 
CDI devices from CRI Config.CDIDevices: []" Dec 12 17:22:47.172758 containerd[1581]: time="2025-12-12T17:22:47.172514578Z" level=info msg="CreateContainer within sandbox \"b598099f34ac1c418dd8b34790feb9060f9b755b94736e44e52f6131f917325b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9334b21e23c4a145a63bec6ddc62aab8de93247b8c3a1c5bb44f61ef3a7018a4\"" Dec 12 17:22:47.173887 containerd[1581]: time="2025-12-12T17:22:47.173350162Z" level=info msg="StartContainer for \"9334b21e23c4a145a63bec6ddc62aab8de93247b8c3a1c5bb44f61ef3a7018a4\"" Dec 12 17:22:47.174901 containerd[1581]: time="2025-12-12T17:22:47.174819556Z" level=info msg="connecting to shim 9334b21e23c4a145a63bec6ddc62aab8de93247b8c3a1c5bb44f61ef3a7018a4" address="unix:///run/containerd/s/2af7180804b08f80d86725b8b1a99a6830937b9aea3eed9733e53bcc018fc249" protocol=ttrpc version=3 Dec 12 17:22:47.198300 systemd[1]: Started cri-containerd-9334b21e23c4a145a63bec6ddc62aab8de93247b8c3a1c5bb44f61ef3a7018a4.scope - libcontainer container 9334b21e23c4a145a63bec6ddc62aab8de93247b8c3a1c5bb44f61ef3a7018a4. Dec 12 17:22:47.252000 audit: BPF prog-id=164 op=LOAD Dec 12 17:22:47.252000 audit[3414]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400017e3e8 a2=98 a3=0 items=0 ppid=3263 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:47.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333462323165323363346131343561363362656336646463363261 Dec 12 17:22:47.252000 audit: BPF prog-id=165 op=LOAD Dec 12 17:22:47.252000 audit[3414]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=400017e168 a2=98 a3=0 items=0 ppid=3263 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:47.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333462323165323363346131343561363362656336646463363261 Dec 12 17:22:47.252000 audit: BPF prog-id=165 op=UNLOAD Dec 12 17:22:47.252000 audit[3414]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:47.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333462323165323363346131343561363362656336646463363261 Dec 12 17:22:47.252000 audit: BPF prog-id=164 op=UNLOAD Dec 12 17:22:47.252000 audit[3414]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:47.252000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333462323165323363346131343561363362656336646463363261 Dec 12 17:22:47.252000 audit: BPF prog-id=166 op=LOAD Dec 12 17:22:47.252000 audit[3414]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400017e648 a2=98 a3=0 items=0 ppid=3263 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:47.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333462323165323363346131343561363362656336646463363261 Dec 12 17:22:47.312013 containerd[1581]: time="2025-12-12T17:22:47.311353566Z" level=info msg="StartContainer for \"9334b21e23c4a145a63bec6ddc62aab8de93247b8c3a1c5bb44f61ef3a7018a4\" returns successfully" Dec 12 17:22:47.324310 systemd[1]: cri-containerd-9334b21e23c4a145a63bec6ddc62aab8de93247b8c3a1c5bb44f61ef3a7018a4.scope: Deactivated successfully. Dec 12 17:22:47.332000 audit: BPF prog-id=166 op=UNLOAD Dec 12 17:22:47.350418 containerd[1581]: time="2025-12-12T17:22:47.350348984Z" level=info msg="received container exit event container_id:\"9334b21e23c4a145a63bec6ddc62aab8de93247b8c3a1c5bb44f61ef3a7018a4\" id:\"9334b21e23c4a145a63bec6ddc62aab8de93247b8c3a1c5bb44f61ef3a7018a4\" pid:3426 exited_at:{seconds:1765560167 nanos:342787599}" Dec 12 17:22:47.385751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9334b21e23c4a145a63bec6ddc62aab8de93247b8c3a1c5bb44f61ef3a7018a4-rootfs.mount: Deactivated successfully. 
Dec 12 17:22:47.389077 kubelet[2721]: E1212 17:22:47.388935 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7zbf" podUID="1e0e30e4-6fdf-475c-9a4a-59287d927d5d" Dec 12 17:22:47.493291 kubelet[2721]: I1212 17:22:47.493249 2721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:22:47.493925 kubelet[2721]: E1212 17:22:47.493613 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:47.493965 kubelet[2721]: E1212 17:22:47.493946 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:48.498730 kubelet[2721]: E1212 17:22:48.498683 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:48.502189 containerd[1581]: time="2025-12-12T17:22:48.502133235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 17:22:49.392804 kubelet[2721]: E1212 17:22:49.389138 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7zbf" podUID="1e0e30e4-6fdf-475c-9a4a-59287d927d5d" Dec 12 17:22:50.236020 containerd[1581]: time="2025-12-12T17:22:50.235972306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:50.237204 containerd[1581]: time="2025-12-12T17:22:50.237151187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=0" Dec 12 17:22:50.238015 containerd[1581]: time="2025-12-12T17:22:50.237991684Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:50.240075 containerd[1581]: time="2025-12-12T17:22:50.240026183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:50.240895 containerd[1581]: time="2025-12-12T17:22:50.240713350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 1.738530151s" Dec 12 17:22:50.240895 containerd[1581]: time="2025-12-12T17:22:50.240745232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 12 17:22:50.247136 containerd[1581]: time="2025-12-12T17:22:50.245963388Z" level=info msg="CreateContainer within sandbox 
\"b598099f34ac1c418dd8b34790feb9060f9b755b94736e44e52f6131f917325b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 17:22:50.256671 containerd[1581]: time="2025-12-12T17:22:50.255557123Z" level=info msg="Container 8c67777c30754babf9581d7817a0e38b8253eb50e636cd9aa968cb5d3f6e018a: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:22:50.264373 containerd[1581]: time="2025-12-12T17:22:50.264319201Z" level=info msg="CreateContainer within sandbox \"b598099f34ac1c418dd8b34790feb9060f9b755b94736e44e52f6131f917325b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8c67777c30754babf9581d7817a0e38b8253eb50e636cd9aa968cb5d3f6e018a\"" Dec 12 17:22:50.264889 containerd[1581]: time="2025-12-12T17:22:50.264840396Z" level=info msg="StartContainer for \"8c67777c30754babf9581d7817a0e38b8253eb50e636cd9aa968cb5d3f6e018a\"" Dec 12 17:22:50.266410 containerd[1581]: time="2025-12-12T17:22:50.266380941Z" level=info msg="connecting to shim 8c67777c30754babf9581d7817a0e38b8253eb50e636cd9aa968cb5d3f6e018a" address="unix:///run/containerd/s/2af7180804b08f80d86725b8b1a99a6830937b9aea3eed9733e53bcc018fc249" protocol=ttrpc version=3 Dec 12 17:22:50.291302 systemd[1]: Started cri-containerd-8c67777c30754babf9581d7817a0e38b8253eb50e636cd9aa968cb5d3f6e018a.scope - libcontainer container 8c67777c30754babf9581d7817a0e38b8253eb50e636cd9aa968cb5d3f6e018a. Dec 12 17:22:50.368000 audit: BPF prog-id=167 op=LOAD Dec 12 17:22:50.375240 kernel: kauditd_printk_skb: 90 callbacks suppressed Dec 12 17:22:50.375353 kernel: audit: type=1334 audit(1765560170.368:553): prog-id=167 op=LOAD Dec 12 17:22:50.375876 kernel: audit: type=1300 audit(1765560170.368:553): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3263 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:50.368000 audit[3475]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3263 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:50.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863363737373763333037353462616266393538316437383137613065 Dec 12 17:22:50.388277 kernel: audit: type=1327 audit(1765560170.368:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863363737373763333037353462616266393538316437383137613065 Dec 12 17:22:50.369000 audit: BPF prog-id=168 op=LOAD Dec 12 17:22:50.389214 kernel: audit: type=1334 audit(1765560170.369:554): prog-id=168 op=LOAD Dec 12 17:22:50.369000 audit[3475]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3263 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:50.396012 kernel: audit: type=1300 audit(1765560170.369:554): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 
a1=4000130168 a2=98 a3=0 items=0 ppid=3263 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:50.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863363737373763333037353462616266393538316437383137613065 Dec 12 17:22:50.399610 kernel: audit: type=1327 audit(1765560170.369:554): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863363737373763333037353462616266393538316437383137613065 Dec 12 17:22:50.399708 kernel: audit: type=1334 audit(1765560170.374:555): prog-id=168 op=UNLOAD Dec 12 17:22:50.374000 audit: BPF prog-id=168 op=UNLOAD Dec 12 17:22:50.374000 audit[3475]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:50.403400 kernel: audit: type=1300 audit(1765560170.374:555): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:50.404136 kernel: audit: type=1327 audit(1765560170.374:555): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863363737373763333037353462616266393538316437383137613065 Dec 12 17:22:50.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863363737373763333037353462616266393538316437383137613065 Dec 12 17:22:50.374000 audit: BPF prog-id=167 op=UNLOAD Dec 12 17:22:50.374000 audit[3475]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:50.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863363737373763333037353462616266393538316437383137613065 Dec 12 17:22:50.374000 audit: BPF prog-id=169 op=LOAD Dec 12 17:22:50.374000 audit[3475]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3263 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:50.374000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863363737373763333037353462616266393538316437383137613065 Dec 12 17:22:50.407063 kernel: audit: type=1334 audit(1765560170.374:556): prog-id=167 op=UNLOAD Dec 12 17:22:50.418161 containerd[1581]: time="2025-12-12T17:22:50.418116775Z" level=info msg="StartContainer for \"8c67777c30754babf9581d7817a0e38b8253eb50e636cd9aa968cb5d3f6e018a\" returns successfully" Dec 12 17:22:50.515018 kubelet[2721]: E1212 17:22:50.514927 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:51.040179 systemd[1]: cri-containerd-8c67777c30754babf9581d7817a0e38b8253eb50e636cd9aa968cb5d3f6e018a.scope: Deactivated successfully. Dec 12 17:22:51.040619 systemd[1]: cri-containerd-8c67777c30754babf9581d7817a0e38b8253eb50e636cd9aa968cb5d3f6e018a.scope: Consumed 461ms CPU time, 179.8M memory peak, 2.9M read from disk, 165.9M written to disk. Dec 12 17:22:51.042745 containerd[1581]: time="2025-12-12T17:22:51.042693682Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:22:51.043988 containerd[1581]: time="2025-12-12T17:22:51.043954325Z" level=info msg="received container exit event container_id:\"8c67777c30754babf9581d7817a0e38b8253eb50e636cd9aa968cb5d3f6e018a\" id:\"8c67777c30754babf9581d7817a0e38b8253eb50e636cd9aa968cb5d3f6e018a\" pid:3488 exited_at:{seconds:1765560171 nanos:43603262}" Dec 12 17:22:51.043000 audit: BPF prog-id=169 op=UNLOAD Dec 12 17:22:51.068903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c67777c30754babf9581d7817a0e38b8253eb50e636cd9aa968cb5d3f6e018a-rootfs.mount: Deactivated successfully. Dec 12 17:22:51.145759 kubelet[2721]: I1212 17:22:51.145727 2721 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 17:22:51.239283 systemd[1]: Created slice kubepods-burstable-pod1fc307bf_6f57_4071_b244_d3e2353cae78.slice - libcontainer container kubepods-burstable-pod1fc307bf_6f57_4071_b244_d3e2353cae78.slice. Dec 12 17:22:51.247975 systemd[1]: Created slice kubepods-burstable-pod8c46f9b9_021e_4630_9626_c64d156571c2.slice - libcontainer container kubepods-burstable-pod8c46f9b9_021e_4630_9626_c64d156571c2.slice. Dec 12 17:22:51.254664 systemd[1]: Created slice kubepods-besteffort-pod84025a67_a460_4acb_8e7e_d73c2b743a45.slice - libcontainer container kubepods-besteffort-pod84025a67_a460_4acb_8e7e_d73c2b743a45.slice. Dec 12 17:22:51.263582 systemd[1]: Created slice kubepods-besteffort-poda3f97266_0eeb_48be_82de_a37522413454.slice - libcontainer container kubepods-besteffort-poda3f97266_0eeb_48be_82de_a37522413454.slice. Dec 12 17:22:51.272085 systemd[1]: Created slice kubepods-besteffort-pod2fb0c1c5_10e8_4dec_a57c_dc20c81a6882.slice - libcontainer container kubepods-besteffort-pod2fb0c1c5_10e8_4dec_a57c_dc20c81a6882.slice. Dec 12 17:22:51.280159 systemd[1]: Created slice kubepods-besteffort-pod814ad701_33db_45b4_b87d_3357a1210c6d.slice - libcontainer container kubepods-besteffort-pod814ad701_33db_45b4_b87d_3357a1210c6d.slice. 
Dec 12 17:22:51.286112 systemd[1]: Created slice kubepods-besteffort-pod2e84fb14_216b_4920_bea5_4db1089bfe0c.slice - libcontainer container kubepods-besteffort-pod2e84fb14_216b_4920_bea5_4db1089bfe0c.slice. Dec 12 17:22:51.287943 kubelet[2721]: I1212 17:22:51.287911 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhrc5\" (UniqueName: \"kubernetes.io/projected/2fb0c1c5-10e8-4dec-a57c-dc20c81a6882-kube-api-access-jhrc5\") pod \"calico-kube-controllers-9d446d548-w7qwm\" (UID: \"2fb0c1c5-10e8-4dec-a57c-dc20c81a6882\") " pod="calico-system/calico-kube-controllers-9d446d548-w7qwm" Dec 12 17:22:51.287943 kubelet[2721]: I1212 17:22:51.287947 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fb0c1c5-10e8-4dec-a57c-dc20c81a6882-tigera-ca-bundle\") pod \"calico-kube-controllers-9d446d548-w7qwm\" (UID: \"2fb0c1c5-10e8-4dec-a57c-dc20c81a6882\") " pod="calico-system/calico-kube-controllers-9d446d548-w7qwm" Dec 12 17:22:51.288144 kubelet[2721]: I1212 17:22:51.287969 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3f97266-0eeb-48be-82de-a37522413454-whisker-ca-bundle\") pod \"whisker-7697776645-lwmh9\" (UID: \"a3f97266-0eeb-48be-82de-a37522413454\") " pod="calico-system/whisker-7697776645-lwmh9" Dec 12 17:22:51.288144 kubelet[2721]: I1212 17:22:51.287985 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwrwq\" (UniqueName: \"kubernetes.io/projected/a3f97266-0eeb-48be-82de-a37522413454-kube-api-access-mwrwq\") pod \"whisker-7697776645-lwmh9\" (UID: \"a3f97266-0eeb-48be-82de-a37522413454\") " pod="calico-system/whisker-7697776645-lwmh9" Dec 12 17:22:51.288144 kubelet[2721]: I1212 17:22:51.288008 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2e84fb14-216b-4920-bea5-4db1089bfe0c-goldmane-key-pair\") pod \"goldmane-666569f655-47rgc\" (UID: \"2e84fb14-216b-4920-bea5-4db1089bfe0c\") " pod="calico-system/goldmane-666569f655-47rgc" Dec 12 17:22:51.288144 kubelet[2721]: I1212 17:22:51.288026 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a3f97266-0eeb-48be-82de-a37522413454-whisker-backend-key-pair\") pod \"whisker-7697776645-lwmh9\" (UID: \"a3f97266-0eeb-48be-82de-a37522413454\") " pod="calico-system/whisker-7697776645-lwmh9" Dec 12 17:22:51.288144 kubelet[2721]: I1212 17:22:51.288071 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp8zz\" (UniqueName: \"kubernetes.io/projected/2e84fb14-216b-4920-bea5-4db1089bfe0c-kube-api-access-gp8zz\") pod \"goldmane-666569f655-47rgc\" (UID: \"2e84fb14-216b-4920-bea5-4db1089bfe0c\") " pod="calico-system/goldmane-666569f655-47rgc" Dec 12 17:22:51.288273 kubelet[2721]: I1212 17:22:51.288090 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vtf2\" (UniqueName: \"kubernetes.io/projected/814ad701-33db-45b4-b87d-3357a1210c6d-kube-api-access-5vtf2\") pod \"calico-apiserver-8546bdfd97-vmgm5\" (UID: \"814ad701-33db-45b4-b87d-3357a1210c6d\") " 
pod="calico-apiserver/calico-apiserver-8546bdfd97-vmgm5" Dec 12 17:22:51.288273 kubelet[2721]: I1212 17:22:51.288106 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/84025a67-a460-4acb-8e7e-d73c2b743a45-calico-apiserver-certs\") pod \"calico-apiserver-8546bdfd97-9gwkk\" (UID: \"84025a67-a460-4acb-8e7e-d73c2b743a45\") " pod="calico-apiserver/calico-apiserver-8546bdfd97-9gwkk" Dec 12 17:22:51.288273 kubelet[2721]: I1212 17:22:51.288121 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c46f9b9-021e-4630-9626-c64d156571c2-config-volume\") pod \"coredns-668d6bf9bc-m45nr\" (UID: \"8c46f9b9-021e-4630-9626-c64d156571c2\") " pod="kube-system/coredns-668d6bf9bc-m45nr" Dec 12 17:22:51.288273 kubelet[2721]: I1212 17:22:51.288140 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/814ad701-33db-45b4-b87d-3357a1210c6d-calico-apiserver-certs\") pod \"calico-apiserver-8546bdfd97-vmgm5\" (UID: \"814ad701-33db-45b4-b87d-3357a1210c6d\") " pod="calico-apiserver/calico-apiserver-8546bdfd97-vmgm5" Dec 12 17:22:51.288273 kubelet[2721]: I1212 17:22:51.288166 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2fth\" (UniqueName: \"kubernetes.io/projected/84025a67-a460-4acb-8e7e-d73c2b743a45-kube-api-access-g2fth\") pod \"calico-apiserver-8546bdfd97-9gwkk\" (UID: \"84025a67-a460-4acb-8e7e-d73c2b743a45\") " pod="calico-apiserver/calico-apiserver-8546bdfd97-9gwkk" Dec 12 17:22:51.288398 kubelet[2721]: I1212 17:22:51.288185 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1fc307bf-6f57-4071-b244-d3e2353cae78-config-volume\") pod \"coredns-668d6bf9bc-fvnjr\" (UID: \"1fc307bf-6f57-4071-b244-d3e2353cae78\") " pod="kube-system/coredns-668d6bf9bc-fvnjr" Dec 12 17:22:51.288398 kubelet[2721]: I1212 17:22:51.288202 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nqt9\" (UniqueName: \"kubernetes.io/projected/1fc307bf-6f57-4071-b244-d3e2353cae78-kube-api-access-7nqt9\") pod \"coredns-668d6bf9bc-fvnjr\" (UID: \"1fc307bf-6f57-4071-b244-d3e2353cae78\") " pod="kube-system/coredns-668d6bf9bc-fvnjr" Dec 12 17:22:51.288398 kubelet[2721]: I1212 17:22:51.288219 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtlrf\" (UniqueName: \"kubernetes.io/projected/8c46f9b9-021e-4630-9626-c64d156571c2-kube-api-access-qtlrf\") pod \"coredns-668d6bf9bc-m45nr\" (UID: \"8c46f9b9-021e-4630-9626-c64d156571c2\") " pod="kube-system/coredns-668d6bf9bc-m45nr" Dec 12 17:22:51.288398 kubelet[2721]: I1212 17:22:51.288237 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e84fb14-216b-4920-bea5-4db1089bfe0c-config\") pod \"goldmane-666569f655-47rgc\" (UID: \"2e84fb14-216b-4920-bea5-4db1089bfe0c\") " pod="calico-system/goldmane-666569f655-47rgc" Dec 12 17:22:51.288398 kubelet[2721]: I1212 17:22:51.288252 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/2e84fb14-216b-4920-bea5-4db1089bfe0c-goldmane-ca-bundle\") pod \"goldmane-666569f655-47rgc\" (UID: \"2e84fb14-216b-4920-bea5-4db1089bfe0c\") " pod="calico-system/goldmane-666569f655-47rgc" Dec 12 17:22:51.429540 systemd[1]: Created slice kubepods-besteffort-pod1e0e30e4_6fdf_475c_9a4a_59287d927d5d.slice - libcontainer container kubepods-besteffort-pod1e0e30e4_6fdf_475c_9a4a_59287d927d5d.slice. Dec 12 17:22:51.431820 containerd[1581]: time="2025-12-12T17:22:51.431782267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7zbf,Uid:1e0e30e4-6fdf-475c-9a4a-59287d927d5d,Namespace:calico-system,Attempt:0,}" Dec 12 17:22:51.520975 kubelet[2721]: E1212 17:22:51.520937 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:51.526958 containerd[1581]: time="2025-12-12T17:22:51.526902542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 17:22:51.543822 kubelet[2721]: E1212 17:22:51.543207 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:51.544514 containerd[1581]: time="2025-12-12T17:22:51.544462294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fvnjr,Uid:1fc307bf-6f57-4071-b244-d3e2353cae78,Namespace:kube-system,Attempt:0,}" Dec 12 17:22:51.552595 kubelet[2721]: E1212 17:22:51.552544 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:51.553764 containerd[1581]: time="2025-12-12T17:22:51.553718060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m45nr,Uid:8c46f9b9-021e-4630-9626-c64d156571c2,Namespace:kube-system,Attempt:0,}" Dec 12 17:22:51.555935 containerd[1581]: time="2025-12-12T17:22:51.555868601Z" level=error msg="Failed to destroy network for sandbox \"9d68f79f8ec895aa33386f3ebb7bb78901fbb1eed7e4dde19b7b6906ed98200a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.558178 containerd[1581]: time="2025-12-12T17:22:51.558125749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8546bdfd97-9gwkk,Uid:84025a67-a460-4acb-8e7e-d73c2b743a45,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:22:51.561016 containerd[1581]: time="2025-12-12T17:22:51.560880210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7zbf,Uid:1e0e30e4-6fdf-475c-9a4a-59287d927d5d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d68f79f8ec895aa33386f3ebb7bb78901fbb1eed7e4dde19b7b6906ed98200a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.566416 kubelet[2721]: E1212 17:22:51.566351 2721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d68f79f8ec895aa33386f3ebb7bb78901fbb1eed7e4dde19b7b6906ed98200a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.569743 kubelet[2721]: E1212 17:22:51.569696 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d68f79f8ec895aa33386f3ebb7bb78901fbb1eed7e4dde19b7b6906ed98200a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7zbf" Dec 12 17:22:51.569854 kubelet[2721]: E1212 17:22:51.569757 2721 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d68f79f8ec895aa33386f3ebb7bb78901fbb1eed7e4dde19b7b6906ed98200a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7zbf" Dec 12 17:22:51.569854 kubelet[2721]: E1212 17:22:51.569811 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c7zbf_calico-system(1e0e30e4-6fdf-475c-9a4a-59287d927d5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c7zbf_calico-system(1e0e30e4-6fdf-475c-9a4a-59287d927d5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d68f79f8ec895aa33386f3ebb7bb78901fbb1eed7e4dde19b7b6906ed98200a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c7zbf" podUID="1e0e30e4-6fdf-475c-9a4a-59287d927d5d" Dec 12 17:22:51.570413 containerd[1581]: time="2025-12-12T17:22:51.570377792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7697776645-lwmh9,Uid:a3f97266-0eeb-48be-82de-a37522413454,Namespace:calico-system,Attempt:0,}" Dec 12 17:22:51.577718 containerd[1581]: time="2025-12-12T17:22:51.577659950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9d446d548-w7qwm,Uid:2fb0c1c5-10e8-4dec-a57c-dc20c81a6882,Namespace:calico-system,Attempt:0,}" Dec 12 17:22:51.584968 containerd[1581]: time="2025-12-12T17:22:51.584918465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8546bdfd97-vmgm5,Uid:814ad701-33db-45b4-b87d-3357a1210c6d,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:22:51.590656 containerd[1581]: time="2025-12-12T17:22:51.590601238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-47rgc,Uid:2e84fb14-216b-4920-bea5-4db1089bfe0c,Namespace:calico-system,Attempt:0,}" Dec 12 17:22:51.658601 containerd[1581]: time="2025-12-12T17:22:51.658534891Z" level=error msg="Failed to destroy network for sandbox \"7b7b3c14498eb51f96d8d463a1106b6773ba4473e7eb636cccf7a68e7b35a4bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.661540 containerd[1581]: time="2025-12-12T17:22:51.661471804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fvnjr,Uid:1fc307bf-6f57-4071-b244-d3e2353cae78,Namespace:kube-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b7b3c14498eb51f96d8d463a1106b6773ba4473e7eb636cccf7a68e7b35a4bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.661866 kubelet[2721]: E1212 17:22:51.661828 2721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b7b3c14498eb51f96d8d463a1106b6773ba4473e7eb636cccf7a68e7b35a4bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.662369 kubelet[2721]: E1212 17:22:51.661996 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b7b3c14498eb51f96d8d463a1106b6773ba4473e7eb636cccf7a68e7b35a4bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fvnjr" Dec 12 17:22:51.662369 kubelet[2721]: E1212 17:22:51.662082 2721 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b7b3c14498eb51f96d8d463a1106b6773ba4473e7eb636cccf7a68e7b35a4bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fvnjr" Dec 12 17:22:51.662369 kubelet[2721]: E1212 17:22:51.662137 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fvnjr_kube-system(1fc307bf-6f57-4071-b244-d3e2353cae78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fvnjr_kube-system(1fc307bf-6f57-4071-b244-d3e2353cae78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b7b3c14498eb51f96d8d463a1106b6773ba4473e7eb636cccf7a68e7b35a4bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fvnjr" podUID="1fc307bf-6f57-4071-b244-d3e2353cae78" Dec 12 17:22:51.677711 containerd[1581]: time="2025-12-12T17:22:51.677642584Z" level=error msg="Failed to destroy network for sandbox \"361fa70d9ac5009675c16243361f9a31c88c0c639f391fe3cf71c7544ee67adf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.679191 containerd[1581]: time="2025-12-12T17:22:51.679141602Z" level=error msg="Failed to destroy network for sandbox \"bdca0e31e3536411b9af344d5e6ff43dc7dc90698d2f9d4567ab430bda3d584b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.682459 containerd[1581]: time="2025-12-12T17:22:51.682285288Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-8546bdfd97-9gwkk,Uid:84025a67-a460-4acb-8e7e-d73c2b743a45,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdca0e31e3536411b9af344d5e6ff43dc7dc90698d2f9d4567ab430bda3d584b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.682624 kubelet[2721]: E1212 17:22:51.682542 2721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdca0e31e3536411b9af344d5e6ff43dc7dc90698d2f9d4567ab430bda3d584b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.682624 kubelet[2721]: E1212 17:22:51.682602 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdca0e31e3536411b9af344d5e6ff43dc7dc90698d2f9d4567ab430bda3d584b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8546bdfd97-9gwkk" Dec 12 17:22:51.682825 kubelet[2721]: E1212 17:22:51.682625 2721 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdca0e31e3536411b9af344d5e6ff43dc7dc90698d2f9d4567ab430bda3d584b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8546bdfd97-9gwkk" Dec 12 17:22:51.682825 kubelet[2721]: E1212 17:22:51.682703 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8546bdfd97-9gwkk_calico-apiserver(84025a67-a460-4acb-8e7e-d73c2b743a45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8546bdfd97-9gwkk_calico-apiserver(84025a67-a460-4acb-8e7e-d73c2b743a45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bdca0e31e3536411b9af344d5e6ff43dc7dc90698d2f9d4567ab430bda3d584b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8546bdfd97-9gwkk" podUID="84025a67-a460-4acb-8e7e-d73c2b743a45" Dec 12 17:22:51.684043 containerd[1581]: time="2025-12-12T17:22:51.683958958Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m45nr,Uid:8c46f9b9-021e-4630-9626-c64d156571c2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"361fa70d9ac5009675c16243361f9a31c88c0c639f391fe3cf71c7544ee67adf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.684296 kubelet[2721]: E1212 17:22:51.684265 2721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"361fa70d9ac5009675c16243361f9a31c88c0c639f391fe3cf71c7544ee67adf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.684407 kubelet[2721]: E1212 17:22:51.684388 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"361fa70d9ac5009675c16243361f9a31c88c0c639f391fe3cf71c7544ee67adf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-m45nr" Dec 12 17:22:51.684502 kubelet[2721]: E1212 17:22:51.684484 2721 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"361fa70d9ac5009675c16243361f9a31c88c0c639f391fe3cf71c7544ee67adf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-m45nr" Dec 12 17:22:51.684639 kubelet[2721]: E1212 17:22:51.684594 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-m45nr_kube-system(8c46f9b9-021e-4630-9626-c64d156571c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-m45nr_kube-system(8c46f9b9-021e-4630-9626-c64d156571c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"361fa70d9ac5009675c16243361f9a31c88c0c639f391fe3cf71c7544ee67adf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-m45nr" podUID="8c46f9b9-021e-4630-9626-c64d156571c2" Dec 12 17:22:51.698074 containerd[1581]: time="2025-12-12T17:22:51.698000758Z" level=error msg="Failed to destroy network for sandbox \"4a160545b7193cf6b724db4db37926fe79846ea0c9a5cb5cf0ec6845d29782f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.700954 containerd[1581]: time="2025-12-12T17:22:51.700880587Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7697776645-lwmh9,Uid:a3f97266-0eeb-48be-82de-a37522413454,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a160545b7193cf6b724db4db37926fe79846ea0c9a5cb5cf0ec6845d29782f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.701221 kubelet[2721]: E1212 17:22:51.701159 2721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a160545b7193cf6b724db4db37926fe79846ea0c9a5cb5cf0ec6845d29782f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.701282 kubelet[2721]: E1212 17:22:51.701249 2721 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a160545b7193cf6b724db4db37926fe79846ea0c9a5cb5cf0ec6845d29782f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7697776645-lwmh9" Dec 12 17:22:51.701282 kubelet[2721]: E1212 17:22:51.701271 2721 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a160545b7193cf6b724db4db37926fe79846ea0c9a5cb5cf0ec6845d29782f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7697776645-lwmh9" Dec 12 17:22:51.701441 kubelet[2721]: E1212 17:22:51.701322 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7697776645-lwmh9_calico-system(a3f97266-0eeb-48be-82de-a37522413454)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7697776645-lwmh9_calico-system(a3f97266-0eeb-48be-82de-a37522413454)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a160545b7193cf6b724db4db37926fe79846ea0c9a5cb5cf0ec6845d29782f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7697776645-lwmh9" podUID="a3f97266-0eeb-48be-82de-a37522413454" Dec 12 17:22:51.702673 containerd[1581]: time="2025-12-12T17:22:51.702526255Z" level=error msg="Failed to destroy network for sandbox \"3479f23ef51199ac9eb65b418602da751ce12ed168c16aaed6d7772763500893\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.704159 containerd[1581]: time="2025-12-12T17:22:51.704114639Z" level=error msg="Failed to destroy network for sandbox \"43fe093843ac34c5bd05a8cd4eb1f6daec41d06d4c64174e1b1d46ca05e24d6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.704943 containerd[1581]: time="2025-12-12T17:22:51.704902491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9d446d548-w7qwm,Uid:2fb0c1c5-10e8-4dec-a57c-dc20c81a6882,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3479f23ef51199ac9eb65b418602da751ce12ed168c16aaed6d7772763500893\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.705455 kubelet[2721]: E1212 17:22:51.705418 2721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3479f23ef51199ac9eb65b418602da751ce12ed168c16aaed6d7772763500893\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.705526 kubelet[2721]: E1212 
17:22:51.705482 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3479f23ef51199ac9eb65b418602da751ce12ed168c16aaed6d7772763500893\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9d446d548-w7qwm" Dec 12 17:22:51.705526 kubelet[2721]: E1212 17:22:51.705504 2721 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3479f23ef51199ac9eb65b418602da751ce12ed168c16aaed6d7772763500893\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9d446d548-w7qwm" Dec 12 17:22:51.705569 kubelet[2721]: E1212 17:22:51.705541 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9d446d548-w7qwm_calico-system(2fb0c1c5-10e8-4dec-a57c-dc20c81a6882)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9d446d548-w7qwm_calico-system(2fb0c1c5-10e8-4dec-a57c-dc20c81a6882)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3479f23ef51199ac9eb65b418602da751ce12ed168c16aaed6d7772763500893\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9d446d548-w7qwm" podUID="2fb0c1c5-10e8-4dec-a57c-dc20c81a6882" Dec 12 17:22:51.707822 containerd[1581]: time="2025-12-12T17:22:51.707781079Z" level=error msg="Failed to destroy network for sandbox \"fe4bcec0585ee69d790113e648ca4631344066a89e9fa45f230d47015354ab2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.708614 containerd[1581]: time="2025-12-12T17:22:51.708578412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-47rgc,Uid:2e84fb14-216b-4920-bea5-4db1089bfe0c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43fe093843ac34c5bd05a8cd4eb1f6daec41d06d4c64174e1b1d46ca05e24d6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.709048 kubelet[2721]: E1212 17:22:51.708922 2721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43fe093843ac34c5bd05a8cd4eb1f6daec41d06d4c64174e1b1d46ca05e24d6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.709048 kubelet[2721]: E1212 17:22:51.708989 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43fe093843ac34c5bd05a8cd4eb1f6daec41d06d4c64174e1b1d46ca05e24d6b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-47rgc" Dec 12 17:22:51.709048 kubelet[2721]: E1212 17:22:51.709019 2721 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43fe093843ac34c5bd05a8cd4eb1f6daec41d06d4c64174e1b1d46ca05e24d6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-47rgc" Dec 12 17:22:51.709262 kubelet[2721]: E1212 17:22:51.709231 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-47rgc_calico-system(2e84fb14-216b-4920-bea5-4db1089bfe0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-47rgc_calico-system(2e84fb14-216b-4920-bea5-4db1089bfe0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43fe093843ac34c5bd05a8cd4eb1f6daec41d06d4c64174e1b1d46ca05e24d6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-47rgc" podUID="2e84fb14-216b-4920-bea5-4db1089bfe0c" Dec 12 17:22:51.710438 containerd[1581]: time="2025-12-12T17:22:51.710396851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8546bdfd97-vmgm5,Uid:814ad701-33db-45b4-b87d-3357a1210c6d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe4bcec0585ee69d790113e648ca4631344066a89e9fa45f230d47015354ab2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.710624 kubelet[2721]: E1212 17:22:51.710578 2721 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe4bcec0585ee69d790113e648ca4631344066a89e9fa45f230d47015354ab2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:22:51.710665 kubelet[2721]: E1212 17:22:51.710641 2721 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe4bcec0585ee69d790113e648ca4631344066a89e9fa45f230d47015354ab2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8546bdfd97-vmgm5" Dec 12 17:22:51.710765 kubelet[2721]: E1212 17:22:51.710663 2721 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe4bcec0585ee69d790113e648ca4631344066a89e9fa45f230d47015354ab2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8546bdfd97-vmgm5" Dec 12 17:22:51.710765 
kubelet[2721]: E1212 17:22:51.710697 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8546bdfd97-vmgm5_calico-apiserver(814ad701-33db-45b4-b87d-3357a1210c6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8546bdfd97-vmgm5_calico-apiserver(814ad701-33db-45b4-b87d-3357a1210c6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe4bcec0585ee69d790113e648ca4631344066a89e9fa45f230d47015354ab2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8546bdfd97-vmgm5" podUID="814ad701-33db-45b4-b87d-3357a1210c6d" Dec 12 17:22:52.410257 systemd[1]: run-netns-cni\x2d97006300\x2d8242\x2d9e8b\x2dfe61\x2dbb859e7410b0.mount: Deactivated successfully. Dec 12 17:22:54.559803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3968518559.mount: Deactivated successfully. Dec 12 17:22:54.797327 containerd[1581]: time="2025-12-12T17:22:54.797260357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:54.798291 containerd[1581]: time="2025-12-12T17:22:54.798234934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 12 17:22:54.799929 containerd[1581]: time="2025-12-12T17:22:54.799852509Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:54.803923 containerd[1581]: time="2025-12-12T17:22:54.803860183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:22:54.804448 containerd[1581]: time="2025-12-12T17:22:54.804404375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.277347942s" Dec 12 17:22:54.804448 containerd[1581]: time="2025-12-12T17:22:54.804441777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 12 17:22:54.816787 containerd[1581]: time="2025-12-12T17:22:54.816130460Z" level=info msg="CreateContainer within sandbox \"b598099f34ac1c418dd8b34790feb9060f9b755b94736e44e52f6131f917325b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 17:22:54.841080 containerd[1581]: time="2025-12-12T17:22:54.840766461Z" level=info msg="Container ee877936c094dba89cb78500beb4ac504d68626ed038d2b65b5b1fd73b0d8edb: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:22:54.854330 containerd[1581]: time="2025-12-12T17:22:54.854273090Z" level=info msg="CreateContainer within sandbox \"b598099f34ac1c418dd8b34790feb9060f9b755b94736e44e52f6131f917325b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ee877936c094dba89cb78500beb4ac504d68626ed038d2b65b5b1fd73b0d8edb\"" Dec 12 17:22:54.854840 
containerd[1581]: time="2025-12-12T17:22:54.854795281Z" level=info msg="StartContainer for \"ee877936c094dba89cb78500beb4ac504d68626ed038d2b65b5b1fd73b0d8edb\"" Dec 12 17:22:54.857780 containerd[1581]: time="2025-12-12T17:22:54.857734253Z" level=info msg="connecting to shim ee877936c094dba89cb78500beb4ac504d68626ed038d2b65b5b1fd73b0d8edb" address="unix:///run/containerd/s/2af7180804b08f80d86725b8b1a99a6830937b9aea3eed9733e53bcc018fc249" protocol=ttrpc version=3 Dec 12 17:22:54.875279 systemd[1]: Started cri-containerd-ee877936c094dba89cb78500beb4ac504d68626ed038d2b65b5b1fd73b0d8edb.scope - libcontainer container ee877936c094dba89cb78500beb4ac504d68626ed038d2b65b5b1fd73b0d8edb. Dec 12 17:22:54.939000 audit: BPF prog-id=170 op=LOAD Dec 12 17:22:54.939000 audit[3801]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3263 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:54.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565383737393336633039346462613839636237383530306265623461 Dec 12 17:22:54.939000 audit: BPF prog-id=171 op=LOAD Dec 12 17:22:54.939000 audit[3801]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3263 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:54.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565383737393336633039346462613839636237383530306265623461 Dec 12 17:22:54.939000 audit: BPF prog-id=171 op=UNLOAD Dec 12 17:22:54.939000 audit[3801]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:54.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565383737393336633039346462613839636237383530306265623461 Dec 12 17:22:54.939000 audit: BPF prog-id=170 op=UNLOAD Dec 12 17:22:54.939000 audit[3801]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3263 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:54.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565383737393336633039346462613839636237383530306265623461 Dec 12 17:22:54.939000 audit: BPF prog-id=172 op=LOAD Dec 12 17:22:54.939000 audit[3801]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 
a1=4000176648 a2=98 a3=0 items=0 ppid=3263 pid=3801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:54.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565383737393336633039346462613839636237383530306265623461 Dec 12 17:22:54.960257 containerd[1581]: time="2025-12-12T17:22:54.960208284Z" level=info msg="StartContainer for \"ee877936c094dba89cb78500beb4ac504d68626ed038d2b65b5b1fd73b0d8edb\" returns successfully" Dec 12 17:22:55.125961 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 17:22:55.126116 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 12 17:22:55.320289 kubelet[2721]: I1212 17:22:55.320235 2721 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3f97266-0eeb-48be-82de-a37522413454-whisker-ca-bundle\") pod \"a3f97266-0eeb-48be-82de-a37522413454\" (UID: \"a3f97266-0eeb-48be-82de-a37522413454\") " Dec 12 17:22:55.320289 kubelet[2721]: I1212 17:22:55.320299 2721 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwrwq\" (UniqueName: \"kubernetes.io/projected/a3f97266-0eeb-48be-82de-a37522413454-kube-api-access-mwrwq\") pod \"a3f97266-0eeb-48be-82de-a37522413454\" (UID: \"a3f97266-0eeb-48be-82de-a37522413454\") " Dec 12 17:22:55.320752 kubelet[2721]: I1212 17:22:55.320345 2721 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a3f97266-0eeb-48be-82de-a37522413454-whisker-backend-key-pair\") pod \"a3f97266-0eeb-48be-82de-a37522413454\" (UID: \"a3f97266-0eeb-48be-82de-a37522413454\") " Dec 12 17:22:55.321241 kubelet[2721]: I1212 17:22:55.321168 2721 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3f97266-0eeb-48be-82de-a37522413454-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a3f97266-0eeb-48be-82de-a37522413454" (UID: "a3f97266-0eeb-48be-82de-a37522413454"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:22:55.334062 kubelet[2721]: I1212 17:22:55.333008 2721 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3f97266-0eeb-48be-82de-a37522413454-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a3f97266-0eeb-48be-82de-a37522413454" (UID: "a3f97266-0eeb-48be-82de-a37522413454"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:22:55.334062 kubelet[2721]: I1212 17:22:55.333218 2721 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3f97266-0eeb-48be-82de-a37522413454-kube-api-access-mwrwq" (OuterVolumeSpecName: "kube-api-access-mwrwq") pod "a3f97266-0eeb-48be-82de-a37522413454" (UID: "a3f97266-0eeb-48be-82de-a37522413454"). InnerVolumeSpecName "kube-api-access-mwrwq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:22:55.406064 systemd[1]: Removed slice kubepods-besteffort-poda3f97266_0eeb_48be_82de_a37522413454.slice - libcontainer container kubepods-besteffort-poda3f97266_0eeb_48be_82de_a37522413454.slice. Dec 12 17:22:55.421303 kubelet[2721]: I1212 17:22:55.421244 2721 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3f97266-0eeb-48be-82de-a37522413454-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 12 17:22:55.421303 kubelet[2721]: I1212 17:22:55.421287 2721 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mwrwq\" (UniqueName: \"kubernetes.io/projected/a3f97266-0eeb-48be-82de-a37522413454-kube-api-access-mwrwq\") on node \"localhost\" DevicePath \"\"" Dec 12 17:22:55.421303 kubelet[2721]: I1212 17:22:55.421305 2721 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a3f97266-0eeb-48be-82de-a37522413454-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 12 17:22:55.548743 kubelet[2721]: E1212 17:22:55.548693 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:55.560895 systemd[1]: var-lib-kubelet-pods-a3f97266\x2d0eeb\x2d48be\x2d82de\x2da37522413454-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 17:22:55.561027 systemd[1]: var-lib-kubelet-pods-a3f97266\x2d0eeb\x2d48be\x2d82de\x2da37522413454-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmwrwq.mount: Deactivated successfully. Dec 12 17:22:55.601574 kubelet[2721]: I1212 17:22:55.600976 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kd6nt" podStartSLOduration=1.995501019 podStartE2EDuration="12.600955898s" podCreationTimestamp="2025-12-12 17:22:43 +0000 UTC" firstStartedPulling="2025-12-12 17:22:44.19970526 +0000 UTC m=+22.910509518" lastFinishedPulling="2025-12-12 17:22:54.805160139 +0000 UTC m=+33.515964397" observedRunningTime="2025-12-12 17:22:55.600930017 +0000 UTC m=+34.311734275" watchObservedRunningTime="2025-12-12 17:22:55.600955898 +0000 UTC m=+34.311760156" Dec 12 17:22:55.618016 systemd[1]: Created slice kubepods-besteffort-podb80b51cf_55e4_4f30_8272_528fb61d0936.slice - libcontainer container kubepods-besteffort-podb80b51cf_55e4_4f30_8272_528fb61d0936.slice. 
Dec 12 17:22:55.624406 kubelet[2721]: I1212 17:22:55.624260 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vfs7\" (UniqueName: \"kubernetes.io/projected/b80b51cf-55e4-4f30-8272-528fb61d0936-kube-api-access-6vfs7\") pod \"whisker-78c6bf566f-r6z8g\" (UID: \"b80b51cf-55e4-4f30-8272-528fb61d0936\") " pod="calico-system/whisker-78c6bf566f-r6z8g" Dec 12 17:22:55.626246 kubelet[2721]: I1212 17:22:55.626190 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b80b51cf-55e4-4f30-8272-528fb61d0936-whisker-ca-bundle\") pod \"whisker-78c6bf566f-r6z8g\" (UID: \"b80b51cf-55e4-4f30-8272-528fb61d0936\") " pod="calico-system/whisker-78c6bf566f-r6z8g" Dec 12 17:22:55.626829 kubelet[2721]: I1212 17:22:55.626757 2721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b80b51cf-55e4-4f30-8272-528fb61d0936-whisker-backend-key-pair\") pod \"whisker-78c6bf566f-r6z8g\" (UID: \"b80b51cf-55e4-4f30-8272-528fb61d0936\") " pod="calico-system/whisker-78c6bf566f-r6z8g" Dec 12 17:22:55.936831 containerd[1581]: time="2025-12-12T17:22:55.936714231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78c6bf566f-r6z8g,Uid:b80b51cf-55e4-4f30-8272-528fb61d0936,Namespace:calico-system,Attempt:0,}" Dec 12 17:22:56.548118 kubelet[2721]: I1212 17:22:56.547700 2721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:22:56.549417 kubelet[2721]: E1212 17:22:56.549093 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:22:56.639002 systemd-networkd[1502]: cali33845592fe8: Link UP Dec 12 17:22:56.641314 systemd-networkd[1502]: cali33845592fe8: Gained carrier Dec 12 17:22:56.656704 containerd[1581]: 2025-12-12 17:22:56.387 [INFO][3864] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:22:56.656704 containerd[1581]: 2025-12-12 17:22:56.423 [INFO][3864] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--78c6bf566f--r6z8g-eth0 whisker-78c6bf566f- calico-system b80b51cf-55e4-4f30-8272-528fb61d0936 882 0 2025-12-12 17:22:55 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78c6bf566f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-78c6bf566f-r6z8g eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali33845592fe8 [] [] }} ContainerID="042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9" Namespace="calico-system" Pod="whisker-78c6bf566f-r6z8g" WorkloadEndpoint="localhost-k8s-whisker--78c6bf566f--r6z8g-" Dec 12 17:22:56.656704 containerd[1581]: 2025-12-12 17:22:56.426 [INFO][3864] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9" Namespace="calico-system" Pod="whisker-78c6bf566f-r6z8g" WorkloadEndpoint="localhost-k8s-whisker--78c6bf566f--r6z8g-eth0" Dec 12 17:22:56.656704 containerd[1581]: 2025-12-12 17:22:56.554 [INFO][3878] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9" HandleID="k8s-pod-network.042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9" Workload="localhost-k8s-whisker--78c6bf566f--r6z8g-eth0" Dec 12 17:22:56.656980 containerd[1581]: 2025-12-12 17:22:56.554 [INFO][3878] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9" HandleID="k8s-pod-network.042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9" Workload="localhost-k8s-whisker--78c6bf566f--r6z8g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000326640), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-78c6bf566f-r6z8g", "timestamp":"2025-12-12 17:22:56.554267551 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:22:56.656980 containerd[1581]: 2025-12-12 17:22:56.554 [INFO][3878] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:22:56.656980 containerd[1581]: 2025-12-12 17:22:56.554 [INFO][3878] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:22:56.656980 containerd[1581]: 2025-12-12 17:22:56.555 [INFO][3878] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:22:56.656980 containerd[1581]: 2025-12-12 17:22:56.573 [INFO][3878] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9" host="localhost" Dec 12 17:22:56.656980 containerd[1581]: 2025-12-12 17:22:56.586 [INFO][3878] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:22:56.656980 containerd[1581]: 2025-12-12 17:22:56.594 [INFO][3878] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:22:56.656980 containerd[1581]: 2025-12-12 17:22:56.596 [INFO][3878] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:22:56.656980 containerd[1581]: 2025-12-12 17:22:56.599 [INFO][3878] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:22:56.656980 containerd[1581]: 2025-12-12 17:22:56.599 [INFO][3878] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9" host="localhost" Dec 12 17:22:56.657551 containerd[1581]: 2025-12-12 17:22:56.603 [INFO][3878] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9 Dec 12 17:22:56.657551 containerd[1581]: 2025-12-12 17:22:56.609 [INFO][3878] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9" host="localhost" Dec 12 17:22:56.657551 containerd[1581]: 2025-12-12 17:22:56.620 [INFO][3878] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9" host="localhost" Dec 12 17:22:56.657551 containerd[1581]: 2025-12-12 17:22:56.620 [INFO][3878] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] 
handle="k8s-pod-network.042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9" host="localhost" Dec 12 17:22:56.657551 containerd[1581]: 2025-12-12 17:22:56.620 [INFO][3878] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 17:22:56.657551 containerd[1581]: 2025-12-12 17:22:56.620 [INFO][3878] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9" HandleID="k8s-pod-network.042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9" Workload="localhost-k8s-whisker--78c6bf566f--r6z8g-eth0" Dec 12 17:22:56.657694 containerd[1581]: 2025-12-12 17:22:56.623 [INFO][3864] cni-plugin/k8s.go 418: Populated endpoint ContainerID="042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9" Namespace="calico-system" Pod="whisker-78c6bf566f-r6z8g" WorkloadEndpoint="localhost-k8s-whisker--78c6bf566f--r6z8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--78c6bf566f--r6z8g-eth0", GenerateName:"whisker-78c6bf566f-", Namespace:"calico-system", SelfLink:"", UID:"b80b51cf-55e4-4f30-8272-528fb61d0936", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78c6bf566f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-78c6bf566f-r6z8g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali33845592fe8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:22:56.657694 containerd[1581]: 2025-12-12 17:22:56.623 [INFO][3864] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9" Namespace="calico-system" Pod="whisker-78c6bf566f-r6z8g" WorkloadEndpoint="localhost-k8s-whisker--78c6bf566f--r6z8g-eth0" Dec 12 17:22:56.657765 containerd[1581]: 2025-12-12 17:22:56.623 [INFO][3864] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33845592fe8 ContainerID="042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9" Namespace="calico-system" Pod="whisker-78c6bf566f-r6z8g" WorkloadEndpoint="localhost-k8s-whisker--78c6bf566f--r6z8g-eth0" Dec 12 17:22:56.657765 containerd[1581]: 2025-12-12 17:22:56.643 [INFO][3864] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9" Namespace="calico-system" Pod="whisker-78c6bf566f-r6z8g" WorkloadEndpoint="localhost-k8s-whisker--78c6bf566f--r6z8g-eth0" Dec 12 17:22:56.657811 containerd[1581]: 2025-12-12 17:22:56.643 [INFO][3864] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9" Namespace="calico-system" Pod="whisker-78c6bf566f-r6z8g" WorkloadEndpoint="localhost-k8s-whisker--78c6bf566f--r6z8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--78c6bf566f--r6z8g-eth0", GenerateName:"whisker-78c6bf566f-", Namespace:"calico-system", SelfLink:"", UID:"b80b51cf-55e4-4f30-8272-528fb61d0936", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78c6bf566f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9", Pod:"whisker-78c6bf566f-r6z8g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali33845592fe8", MAC:"fa:13:26:b5:fd:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:22:56.657858 containerd[1581]: 2025-12-12 17:22:56.653 [INFO][3864] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9" Namespace="calico-system" Pod="whisker-78c6bf566f-r6z8g" WorkloadEndpoint="localhost-k8s-whisker--78c6bf566f--r6z8g-eth0" Dec 12 17:22:56.790195 containerd[1581]: time="2025-12-12T17:22:56.790130072Z" level=info msg="connecting to shim 042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9" address="unix:///run/containerd/s/1fcac4dc6b370f80a0d7d19d063f925868ca1aa2e2f36aed609e1848f9ee4a98" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:22:56.824289 systemd[1]: Started cri-containerd-042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9.scope - libcontainer container 042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9. 
Dec 12 17:22:56.835000 audit: BPF prog-id=173 op=LOAD Dec 12 17:22:56.837290 kernel: kauditd_printk_skb: 21 callbacks suppressed Dec 12 17:22:56.837342 kernel: audit: type=1334 audit(1765560176.835:564): prog-id=173 op=LOAD Dec 12 17:22:56.836000 audit: BPF prog-id=174 op=LOAD Dec 12 17:22:56.838657 kernel: audit: type=1334 audit(1765560176.836:565): prog-id=174 op=LOAD Dec 12 17:22:56.838698 kernel: audit: type=1300 audit(1765560176.836:565): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4004 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:56.836000 audit[4016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4004 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:56.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034326339383461323032303532613532653561663161636630306530 Dec 12 17:22:56.844272 kernel: audit: type=1327 audit(1765560176.836:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034326339383461323032303532613532653561663161636630306530 Dec 12 17:22:56.836000 audit: BPF prog-id=174 op=UNLOAD Dec 12 17:22:56.845110 kernel: audit: type=1334 audit(1765560176.836:566): prog-id=174 op=UNLOAD Dec 12 17:22:56.836000 audit[4016]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4004 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:56.847938 kernel: audit: type=1300 audit(1765560176.836:566): arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4004 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:56.847997 kernel: audit: type=1327 audit(1765560176.836:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034326339383461323032303532613532653561663161636630306530 Dec 12 17:22:56.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034326339383461323032303532613532653561663161636630306530 Dec 12 17:22:56.850670 kernel: audit: type=1334 audit(1765560176.836:567): prog-id=175 op=LOAD Dec 12 17:22:56.836000 audit: BPF prog-id=175 op=LOAD Dec 12 17:22:56.851286 kernel: audit: type=1300 audit(1765560176.836:567): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4004 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:56.836000 audit[4016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4004 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:56.853994 kernel: audit: type=1327 audit(1765560176.836:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034326339383461323032303532613532653561663161636630306530 Dec 12 17:22:56.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034326339383461323032303532613532653561663161636630306530 Dec 12 17:22:56.837000 audit: BPF prog-id=176 op=LOAD Dec 12 17:22:56.837000 audit[4016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4004 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:56.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034326339383461323032303532613532653561663161636630306530 Dec 12 17:22:56.840000 audit: BPF prog-id=176 op=UNLOAD Dec 12 17:22:56.840000 audit[4016]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4004 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:56.840000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034326339383461323032303532613532653561663161636630306530 Dec 12 17:22:56.840000 audit: BPF prog-id=175 op=UNLOAD Dec 12 17:22:56.840000 audit[4016]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4004 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:56.840000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034326339383461323032303532613532653561663161636630306530 Dec 12 17:22:56.840000 audit: BPF prog-id=177 op=LOAD Dec 12 17:22:56.840000 audit[4016]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4004 pid=4016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:56.840000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034326339383461323032303532613532653561663161636630306530 Dec 12 17:22:56.856765 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:22:56.877735 containerd[1581]: time="2025-12-12T17:22:56.877672159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78c6bf566f-r6z8g,Uid:b80b51cf-55e4-4f30-8272-528fb61d0936,Namespace:calico-system,Attempt:0,} returns sandbox id \"042c984a202052a52e5af1acf00e0765b8e11751200c5aa5bce26c512fe42fe9\"" Dec 12 17:22:56.879400 containerd[1581]: time="2025-12-12T17:22:56.879356210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:22:57.083056 containerd[1581]: time="2025-12-12T17:22:57.082889023Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:22:57.085949 containerd[1581]: time="2025-12-12T17:22:57.085877140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:22:57.086052 containerd[1581]: time="2025-12-12T17:22:57.085896701Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:22:57.086269 kubelet[2721]: E1212 17:22:57.086209 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:22:57.086315 kubelet[2721]: E1212 17:22:57.086272 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:22:57.087376 kubelet[2721]: E1212 17:22:57.087313 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:16fbc09b741f4bad9a7e46252a777791,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vfs7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78c6bf566f-r6z8g_calico-system(b80b51cf-55e4-4f30-8272-528fb61d0936): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:22:57.089593 containerd[1581]: time="2025-12-12T17:22:57.089547173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:22:57.290282 containerd[1581]: time="2025-12-12T17:22:57.290222253Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:22:57.292206 containerd[1581]: time="2025-12-12T17:22:57.292141514Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:22:57.292295 containerd[1581]: time="2025-12-12T17:22:57.292248279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:22:57.292507 kubelet[2721]: E1212 17:22:57.292441 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:22:57.292575 kubelet[2721]: E1212 17:22:57.292516 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:22:57.292744 kubelet[2721]: E1212 17:22:57.292642 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vfs7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78c6bf566f-r6z8g_calico-system(b80b51cf-55e4-4f30-8272-528fb61d0936): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:22:57.294436 kubelet[2721]: E1212 17:22:57.294366 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78c6bf566f-r6z8g" podUID="b80b51cf-55e4-4f30-8272-528fb61d0936" Dec 12 17:22:57.392636 kubelet[2721]: I1212 17:22:57.392471 2721 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3f97266-0eeb-48be-82de-a37522413454" path="/var/lib/kubelet/pods/a3f97266-0eeb-48be-82de-a37522413454/volumes" Dec 12 17:22:57.552612 kubelet[2721]: E1212 17:22:57.552563 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78c6bf566f-r6z8g" podUID="b80b51cf-55e4-4f30-8272-528fb61d0936" Dec 12 17:22:57.596000 audit[4044]: NETFILTER_CFG table=filter:121 family=2 entries=22 op=nft_register_rule pid=4044 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:57.596000 audit[4044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe68450c0 a2=0 a3=1 items=0 ppid=2830 pid=4044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:57.596000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:57.616000 audit[4044]: NETFILTER_CFG table=nat:122 family=2 entries=12 op=nft_register_rule pid=4044 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:22:57.616000 audit[4044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe68450c0 a2=0 a3=1 items=0 ppid=2830 pid=4044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:22:57.616000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:22:58.526238 systemd-networkd[1502]: cali33845592fe8: Gained IPv6LL Dec 12 17:22:58.554355 kubelet[2721]: E1212 17:22:58.554237 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78c6bf566f-r6z8g" podUID="b80b51cf-55e4-4f30-8272-528fb61d0936" Dec 12 17:23:00.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.37:22-10.0.0.1:45694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:00.336307 systemd[1]: Started sshd@7-10.0.0.37:22-10.0.0.1:45694.service - OpenSSH per-connection server daemon (10.0.0.1:45694). 
Dec 12 17:23:00.399000 audit[4121]: USER_ACCT pid=4121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:00.400524 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 45694 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:23:00.400000 audit[4121]: CRED_ACQ pid=4121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:00.400000 audit[4121]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff9323a50 a2=3 a3=0 items=0 ppid=1 pid=4121 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:00.400000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:00.401948 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:23:00.406727 systemd-logind[1562]: New session 8 of user core. Dec 12 17:23:00.417328 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 17:23:00.418000 audit[4121]: USER_START pid=4121 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:00.420000 audit[4124]: CRED_ACQ pid=4124 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:00.539792 sshd[4124]: Connection closed by 10.0.0.1 port 45694 Dec 12 17:23:00.540174 sshd-session[4121]: pam_unix(sshd:session): session closed for user core Dec 12 17:23:00.540000 audit[4121]: USER_END pid=4121 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:00.540000 audit[4121]: CRED_DISP pid=4121 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:00.544342 systemd[1]: sshd@7-10.0.0.37:22-10.0.0.1:45694.service: Deactivated successfully. Dec 12 17:23:00.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.37:22-10.0.0.1:45694 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:00.547571 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 17:23:00.548390 systemd-logind[1562]: Session 8 logged out. Waiting for processes to exit. Dec 12 17:23:00.549912 systemd-logind[1562]: Removed session 8. 
Dec 12 17:23:02.391075 containerd[1581]: time="2025-12-12T17:23:02.391011947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9d446d548-w7qwm,Uid:2fb0c1c5-10e8-4dec-a57c-dc20c81a6882,Namespace:calico-system,Attempt:0,}" Dec 12 17:23:02.552807 systemd-networkd[1502]: caliac86fe98db3: Link UP Dec 12 17:23:02.554654 systemd-networkd[1502]: caliac86fe98db3: Gained carrier Dec 12 17:23:02.569773 containerd[1581]: 2025-12-12 17:23:02.451 [INFO][4187] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:23:02.569773 containerd[1581]: 2025-12-12 17:23:02.468 [INFO][4187] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--9d446d548--w7qwm-eth0 calico-kube-controllers-9d446d548- calico-system 2fb0c1c5-10e8-4dec-a57c-dc20c81a6882 818 0 2025-12-12 17:22:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9d446d548 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-9d446d548-w7qwm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliac86fe98db3 [] [] }} ContainerID="5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd" Namespace="calico-system" Pod="calico-kube-controllers-9d446d548-w7qwm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9d446d548--w7qwm-" Dec 12 17:23:02.569773 containerd[1581]: 2025-12-12 17:23:02.468 [INFO][4187] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd" Namespace="calico-system" Pod="calico-kube-controllers-9d446d548-w7qwm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9d446d548--w7qwm-eth0" Dec 12 17:23:02.569773 containerd[1581]: 2025-12-12 17:23:02.498 [INFO][4200] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd" HandleID="k8s-pod-network.5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd" Workload="localhost-k8s-calico--kube--controllers--9d446d548--w7qwm-eth0" Dec 12 17:23:02.570056 containerd[1581]: 2025-12-12 17:23:02.498 [INFO][4200] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd" HandleID="k8s-pod-network.5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd" Workload="localhost-k8s-calico--kube--controllers--9d446d548--w7qwm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323360), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-9d446d548-w7qwm", "timestamp":"2025-12-12 17:23:02.498361946 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:23:02.570056 containerd[1581]: 2025-12-12 17:23:02.498 [INFO][4200] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:23:02.570056 containerd[1581]: 2025-12-12 17:23:02.498 [INFO][4200] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:23:02.570056 containerd[1581]: 2025-12-12 17:23:02.498 [INFO][4200] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:23:02.570056 containerd[1581]: 2025-12-12 17:23:02.509 [INFO][4200] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd" host="localhost" Dec 12 17:23:02.570056 containerd[1581]: 2025-12-12 17:23:02.515 [INFO][4200] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:23:02.570056 containerd[1581]: 2025-12-12 17:23:02.521 [INFO][4200] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:23:02.570056 containerd[1581]: 2025-12-12 17:23:02.524 [INFO][4200] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:23:02.570056 containerd[1581]: 2025-12-12 17:23:02.528 [INFO][4200] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:23:02.570056 containerd[1581]: 2025-12-12 17:23:02.528 [INFO][4200] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd" host="localhost" Dec 12 17:23:02.570372 containerd[1581]: 2025-12-12 17:23:02.532 [INFO][4200] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd Dec 12 17:23:02.570372 containerd[1581]: 2025-12-12 17:23:02.537 [INFO][4200] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd" host="localhost" Dec 12 17:23:02.570372 containerd[1581]: 2025-12-12 17:23:02.545 [INFO][4200] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd" host="localhost" Dec 12 17:23:02.570372 containerd[1581]: 2025-12-12 17:23:02.545 [INFO][4200] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd" host="localhost" Dec 12 17:23:02.570372 containerd[1581]: 2025-12-12 17:23:02.545 [INFO][4200] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:23:02.570372 containerd[1581]: 2025-12-12 17:23:02.545 [INFO][4200] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd" HandleID="k8s-pod-network.5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd" Workload="localhost-k8s-calico--kube--controllers--9d446d548--w7qwm-eth0" Dec 12 17:23:02.570489 containerd[1581]: 2025-12-12 17:23:02.548 [INFO][4187] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd" Namespace="calico-system" Pod="calico-kube-controllers-9d446d548-w7qwm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9d446d548--w7qwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9d446d548--w7qwm-eth0", GenerateName:"calico-kube-controllers-9d446d548-", Namespace:"calico-system", SelfLink:"", UID:"2fb0c1c5-10e8-4dec-a57c-dc20c81a6882", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 22, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9d446d548", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-9d446d548-w7qwm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliac86fe98db3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:23:02.570539 containerd[1581]: 2025-12-12 17:23:02.548 [INFO][4187] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd" Namespace="calico-system" Pod="calico-kube-controllers-9d446d548-w7qwm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9d446d548--w7qwm-eth0" Dec 12 17:23:02.570539 containerd[1581]: 2025-12-12 17:23:02.548 [INFO][4187] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac86fe98db3 ContainerID="5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd" Namespace="calico-system" Pod="calico-kube-controllers-9d446d548-w7qwm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9d446d548--w7qwm-eth0" Dec 12 17:23:02.570539 containerd[1581]: 2025-12-12 17:23:02.555 [INFO][4187] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd" Namespace="calico-system" Pod="calico-kube-controllers-9d446d548-w7qwm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9d446d548--w7qwm-eth0" Dec 12 17:23:02.570606 containerd[1581]: 2025-12-12 17:23:02.555 [INFO][4187] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd" Namespace="calico-system" Pod="calico-kube-controllers-9d446d548-w7qwm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9d446d548--w7qwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--9d446d548--w7qwm-eth0", GenerateName:"calico-kube-controllers-9d446d548-", Namespace:"calico-system", SelfLink:"", UID:"2fb0c1c5-10e8-4dec-a57c-dc20c81a6882", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 22, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9d446d548", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd", Pod:"calico-kube-controllers-9d446d548-w7qwm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliac86fe98db3", MAC:"e6:63:b4:13:e1:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:23:02.570653 containerd[1581]: 2025-12-12 17:23:02.567 [INFO][4187] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd" Namespace="calico-system" Pod="calico-kube-controllers-9d446d548-w7qwm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--9d446d548--w7qwm-eth0" Dec 12 17:23:02.597187 containerd[1581]: time="2025-12-12T17:23:02.597135799Z" level=info msg="connecting to shim 5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd" address="unix:///run/containerd/s/6f28afaecce45f237c1bd3c9d9bcba67047d5890d6b646ef25eae1b2195b93c3" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:23:02.629313 systemd[1]: Started cri-containerd-5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd.scope - libcontainer container 5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd. 
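Every Calico line captured by containerd above shares the same prefix: a wall-clock timestamp, [LEVEL][pid], the source file with its line number, then the message. A small sketch that pulls that prefix apart with a regular expression; parse_calico_line is a hypothetical helper written for illustration, not part of Calico or containerd:

    import re

    # e.g. "2025-12-12 17:23:02.548 [INFO][4187] cni-plugin/k8s.go 419: Calico CNI using IPs: ..."
    CALICO_LINE = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
        r"\[(?P<level>\w+)\]\[(?P<pid>\d+)\] "
        r"(?P<file>\S+) (?P<line>\d+): (?P<msg>.*)"
    )

    def parse_calico_line(line: str) -> dict:
        m = CALICO_LINE.match(line)
        return m.groupdict() if m else {}

    sample = ("2025-12-12 17:23:02.548 [INFO][4187] cni-plugin/k8s.go 419: "
              "Calico CNI using IPs: [192.168.88.130/32]")
    parsed = parse_calico_line(sample)
    print(parsed["level"], parsed["file"], parsed["msg"])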
Dec 12 17:23:02.649000 audit: BPF prog-id=178 op=LOAD Dec 12 17:23:02.651853 kernel: kauditd_printk_skb: 29 callbacks suppressed Dec 12 17:23:02.651955 kernel: audit: type=1334 audit(1765560182.649:583): prog-id=178 op=LOAD Dec 12 17:23:02.652000 audit: BPF prog-id=179 op=LOAD Dec 12 17:23:02.652000 audit[4239]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4228 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:02.658042 kernel: audit: type=1334 audit(1765560182.652:584): prog-id=179 op=LOAD Dec 12 17:23:02.658116 kernel: audit: type=1300 audit(1765560182.652:584): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4228 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:02.658156 kernel: audit: type=1327 audit(1765560182.652:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565303065323462656466353565646138396362353962306461316438 Dec 12 17:23:02.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565303065323462656466353565646138396362353962306461316438 Dec 12 17:23:02.653000 audit: BPF prog-id=179 op=UNLOAD Dec 12 17:23:02.661979 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:23:02.653000 audit[4239]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4228 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:02.665119 kernel: audit: type=1334 audit(1765560182.653:585): prog-id=179 op=UNLOAD Dec 12 17:23:02.665210 kernel: audit: type=1300 audit(1765560182.653:585): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4228 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:02.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565303065323462656466353565646138396362353962306461316438 Dec 12 17:23:02.668999 kernel: audit: type=1327 audit(1765560182.653:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565303065323462656466353565646138396362353962306461316438 Dec 12 17:23:02.669117 kernel: audit: type=1334 audit(1765560182.653:586): prog-id=180 op=LOAD Dec 12 17:23:02.653000 audit: BPF prog-id=180 op=LOAD Dec 12 17:23:02.653000 audit[4239]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 
a1=40001303e8 a2=98 a3=0 items=0 ppid=4228 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:02.672988 kernel: audit: type=1300 audit(1765560182.653:586): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4228 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:02.673093 kernel: audit: type=1327 audit(1765560182.653:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565303065323462656466353565646138396362353962306461316438 Dec 12 17:23:02.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565303065323462656466353565646138396362353962306461316438 Dec 12 17:23:02.653000 audit: BPF prog-id=181 op=LOAD Dec 12 17:23:02.653000 audit[4239]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4228 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:02.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565303065323462656466353565646138396362353962306461316438 Dec 12 17:23:02.653000 audit: BPF prog-id=181 op=UNLOAD Dec 12 17:23:02.653000 audit[4239]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4228 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:02.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565303065323462656466353565646138396362353962306461316438 Dec 12 17:23:02.653000 audit: BPF prog-id=180 op=UNLOAD Dec 12 17:23:02.653000 audit[4239]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4228 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:02.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565303065323462656466353565646138396362353962306461316438 Dec 12 17:23:02.653000 audit: BPF prog-id=182 op=LOAD Dec 12 17:23:02.653000 audit[4239]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4228 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:02.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565303065323462656466353565646138396362353962306461316438 Dec 12 17:23:02.696746 containerd[1581]: time="2025-12-12T17:23:02.696604243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9d446d548-w7qwm,Uid:2fb0c1c5-10e8-4dec-a57c-dc20c81a6882,Namespace:calico-system,Attempt:0,} returns sandbox id \"5e00e24bedf55eda89cb59b0da1d81817aa4093388f2c441189f1891eb3195dd\"" Dec 12 17:23:02.701780 containerd[1581]: time="2025-12-12T17:23:02.701709673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:23:02.935264 containerd[1581]: time="2025-12-12T17:23:02.935124955Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:02.936995 containerd[1581]: time="2025-12-12T17:23:02.936938917Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:23:02.937073 containerd[1581]: time="2025-12-12T17:23:02.936990439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:02.937274 kubelet[2721]: E1212 17:23:02.937233 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:23:02.938175 kubelet[2721]: E1212 17:23:02.937290 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:23:02.938175 kubelet[2721]: E1212 17:23:02.937429 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhrc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-9d446d548-w7qwm_calico-system(2fb0c1c5-10e8-4dec-a57c-dc20c81a6882): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:02.939219 kubelet[2721]: E1212 17:23:02.938709 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9d446d548-w7qwm" podUID="2fb0c1c5-10e8-4dec-a57c-dc20c81a6882" Dec 12 17:23:03.389240 kubelet[2721]: E1212 17:23:03.389198 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Dec 12 17:23:03.389622 containerd[1581]: time="2025-12-12T17:23:03.389589962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fvnjr,Uid:1fc307bf-6f57-4071-b244-d3e2353cae78,Namespace:kube-system,Attempt:0,}" Dec 12 17:23:03.495569 systemd-networkd[1502]: cali352a7adab84: Link UP Dec 12 17:23:03.496243 systemd-networkd[1502]: cali352a7adab84: Gained carrier Dec 12 17:23:03.512858 containerd[1581]: 2025-12-12 17:23:03.414 [INFO][4287] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:23:03.512858 containerd[1581]: 2025-12-12 17:23:03.429 [INFO][4287] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--fvnjr-eth0 coredns-668d6bf9bc- kube-system 1fc307bf-6f57-4071-b244-d3e2353cae78 812 0 2025-12-12 17:22:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-fvnjr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali352a7adab84 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-fvnjr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fvnjr-" Dec 12 17:23:03.512858 containerd[1581]: 2025-12-12 17:23:03.429 [INFO][4287] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-fvnjr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fvnjr-eth0" Dec 12 17:23:03.512858 containerd[1581]: 2025-12-12 17:23:03.453 [INFO][4301] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd" HandleID="k8s-pod-network.cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd" Workload="localhost-k8s-coredns--668d6bf9bc--fvnjr-eth0" Dec 12 17:23:03.513344 containerd[1581]: 2025-12-12 17:23:03.453 [INFO][4301] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd" HandleID="k8s-pod-network.cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd" Workload="localhost-k8s-coredns--668d6bf9bc--fvnjr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137550), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-fvnjr", "timestamp":"2025-12-12 17:23:03.453305716 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:23:03.513344 containerd[1581]: 2025-12-12 17:23:03.453 [INFO][4301] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:23:03.513344 containerd[1581]: 2025-12-12 17:23:03.453 [INFO][4301] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:23:03.513344 containerd[1581]: 2025-12-12 17:23:03.453 [INFO][4301] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:23:03.513344 containerd[1581]: 2025-12-12 17:23:03.463 [INFO][4301] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd" host="localhost" Dec 12 17:23:03.513344 containerd[1581]: 2025-12-12 17:23:03.468 [INFO][4301] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:23:03.513344 containerd[1581]: 2025-12-12 17:23:03.473 [INFO][4301] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:23:03.513344 containerd[1581]: 2025-12-12 17:23:03.475 [INFO][4301] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:23:03.513344 containerd[1581]: 2025-12-12 17:23:03.477 [INFO][4301] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:23:03.513344 containerd[1581]: 2025-12-12 17:23:03.477 [INFO][4301] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd" host="localhost" Dec 12 17:23:03.513560 containerd[1581]: 2025-12-12 17:23:03.479 [INFO][4301] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd Dec 12 17:23:03.513560 containerd[1581]: 2025-12-12 17:23:03.483 [INFO][4301] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd" host="localhost" Dec 12 17:23:03.513560 containerd[1581]: 2025-12-12 17:23:03.488 [INFO][4301] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd" host="localhost" Dec 12 17:23:03.513560 containerd[1581]: 2025-12-12 17:23:03.488 [INFO][4301] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd" host="localhost" Dec 12 17:23:03.513560 containerd[1581]: 2025-12-12 17:23:03.488 [INFO][4301] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
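This second transaction claims the next address from the same affine block: 192.168.88.131/26 for the coredns sandbox, one past the 192.168.88.130 handed to calico-kube-controllers above. A simplified illustration of picking the first unused host address in that block; the real allocator in ipam/ipam.go works against the block document it loads from the datastore, so this is only a sketch:

    import ipaddress

    def first_free(block: str, used: set) -> str:
        """Return the first host address in `block` not listed in `used`."""
        for addr in ipaddress.ip_network(block).hosts():
            if str(addr) not in used:
                return str(addr)
        raise RuntimeError("block exhausted")

    # .130 went to the calico-kube-controllers sandbox above; .129 is assumed
    # already taken by an earlier workload (not shown in this excerpt).
    print(first_free("192.168.88.128/26", {"192.168.88.129", "192.168.88.130"}))
    # -> 192.168.88.131, matching the claim logged here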
Dec 12 17:23:03.513560 containerd[1581]: 2025-12-12 17:23:03.488 [INFO][4301] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd" HandleID="k8s-pod-network.cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd" Workload="localhost-k8s-coredns--668d6bf9bc--fvnjr-eth0" Dec 12 17:23:03.513666 containerd[1581]: 2025-12-12 17:23:03.490 [INFO][4287] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-fvnjr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fvnjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--fvnjr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1fc307bf-6f57-4071-b244-d3e2353cae78", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 22, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-fvnjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali352a7adab84", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:23:03.513717 containerd[1581]: 2025-12-12 17:23:03.491 [INFO][4287] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-fvnjr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fvnjr-eth0" Dec 12 17:23:03.513717 containerd[1581]: 2025-12-12 17:23:03.491 [INFO][4287] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali352a7adab84 ContainerID="cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-fvnjr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fvnjr-eth0" Dec 12 17:23:03.513717 containerd[1581]: 2025-12-12 17:23:03.497 [INFO][4287] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-fvnjr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fvnjr-eth0" Dec 12 17:23:03.513794 
containerd[1581]: 2025-12-12 17:23:03.497 [INFO][4287] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-fvnjr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fvnjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--fvnjr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1fc307bf-6f57-4071-b244-d3e2353cae78", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 22, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd", Pod:"coredns-668d6bf9bc-fvnjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali352a7adab84", MAC:"9a:b6:c8:23:c1:86", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:23:03.513794 containerd[1581]: 2025-12-12 17:23:03.510 [INFO][4287] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd" Namespace="kube-system" Pod="coredns-668d6bf9bc-fvnjr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fvnjr-eth0" Dec 12 17:23:03.520118 kubelet[2721]: I1212 17:23:03.520078 2721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:23:03.520481 kubelet[2721]: E1212 17:23:03.520463 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:23:03.539333 containerd[1581]: time="2025-12-12T17:23:03.539277285Z" level=info msg="connecting to shim cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd" address="unix:///run/containerd/s/ac3db57585e1e729ea8426abec6118109fc08da1f58f9df6bcc0491f03f84943" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:23:03.562000 audit[4345]: NETFILTER_CFG table=filter:123 family=2 entries=21 op=nft_register_rule pid=4345 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:03.562000 audit[4345]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe779a950 a2=0 a3=1 items=0 ppid=2830 pid=4345 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:03.562000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:03.566308 kubelet[2721]: E1212 17:23:03.566266 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:23:03.566000 audit[4345]: NETFILTER_CFG table=nat:124 family=2 entries=19 op=nft_register_chain pid=4345 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:03.568114 kubelet[2721]: E1212 17:23:03.568004 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9d446d548-w7qwm" podUID="2fb0c1c5-10e8-4dec-a57c-dc20c81a6882" Dec 12 17:23:03.566000 audit[4345]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffe779a950 a2=0 a3=1 items=0 ppid=2830 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:03.566000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:03.586308 systemd[1]: Started cri-containerd-cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd.scope - libcontainer container cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd. 
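The pull of ghcr.io/flatcar/calico/kube-controllers:v3.30.4 failed with a registry 404, so kubelet moves the container from ErrImagePull into ImagePullBackOff and retries on a growing delay. The sketch below assumes the commonly documented kubelet defaults (10s initial back-off, doubling, capped at 5 minutes); the actual values on this node are not visible in the log:

    def image_pull_backoff(attempt: int, base: float = 10.0, cap: float = 300.0) -> float:
        """Delay before pull attempt `attempt` (0-based), assuming default back-off."""
        return min(base * (2 ** attempt), cap)

    # 10s, 20s, 40s, 80s, 160s, then capped at 300s between retries.
    print([image_pull_backoff(n) for n in range(6)])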
Dec 12 17:23:03.596000 audit: BPF prog-id=183 op=LOAD Dec 12 17:23:03.597000 audit: BPF prog-id=184 op=LOAD Dec 12 17:23:03.597000 audit[4338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4326 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:03.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366623861363938346234656336326661626331333333346534633363 Dec 12 17:23:03.597000 audit: BPF prog-id=184 op=UNLOAD Dec 12 17:23:03.597000 audit[4338]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4326 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:03.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366623861363938346234656336326661626331333333346534633363 Dec 12 17:23:03.597000 audit: BPF prog-id=185 op=LOAD Dec 12 17:23:03.597000 audit[4338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4326 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:03.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366623861363938346234656336326661626331333333346534633363 Dec 12 17:23:03.597000 audit: BPF prog-id=186 op=LOAD Dec 12 17:23:03.597000 audit[4338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4326 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:03.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366623861363938346234656336326661626331333333346534633363 Dec 12 17:23:03.597000 audit: BPF prog-id=186 op=UNLOAD Dec 12 17:23:03.597000 audit[4338]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4326 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:03.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366623861363938346234656336326661626331333333346534633363 Dec 12 17:23:03.597000 audit: BPF prog-id=185 op=UNLOAD Dec 12 17:23:03.597000 audit[4338]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4326 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:03.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366623861363938346234656336326661626331333333346534633363 Dec 12 17:23:03.597000 audit: BPF prog-id=187 op=LOAD Dec 12 17:23:03.597000 audit[4338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4326 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:03.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366623861363938346234656336326661626331333333346534633363 Dec 12 17:23:03.599210 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:23:03.666746 containerd[1581]: time="2025-12-12T17:23:03.666512063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fvnjr,Uid:1fc307bf-6f57-4071-b244-d3e2353cae78,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd\"" Dec 12 17:23:03.667678 kubelet[2721]: E1212 17:23:03.667640 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:23:03.670988 containerd[1581]: time="2025-12-12T17:23:03.670922416Z" level=info msg="CreateContainer within sandbox \"cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:23:03.902182 systemd-networkd[1502]: caliac86fe98db3: Gained IPv6LL Dec 12 17:23:03.930473 containerd[1581]: time="2025-12-12T17:23:03.929477072Z" level=info msg="Container 8ee1d7541cd612a14e8ac0b2d38d2851e6f71239a84f47f2880b80e64403e95d: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:23:03.932453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2524748033.mount: Deactivated successfully. 
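The repeated kubelet "Nameserver limits exceeded" warnings mean the pod's effective resolv.conf listed more nameservers than the resolver limit (three, matching glibc's MAXNS), so only the first three are applied: 1.1.1.1 1.0.0.1 8.8.8.8. A sketch of that truncation; the omitted server is not named in the log, so the fourth entry below is just a placeholder:

    MAX_NAMESERVERS = 3   # resolver limit assumed here; the dropped entries are not logged

    def applied_nameservers(configured: list) -> list:
        """Keep only the nameservers the resolver will actually use."""
        return configured[:MAX_NAMESERVERS]

    # "9.9.9.9" stands in for whatever extra server triggered the warning.
    print(applied_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"]))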
Dec 12 17:23:03.969046 containerd[1581]: time="2025-12-12T17:23:03.968983724Z" level=info msg="CreateContainer within sandbox \"cfb8a6984b4ec62fabc13334e4c3c2a79bbf986c3bba7fd1409293e368b506cd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ee1d7541cd612a14e8ac0b2d38d2851e6f71239a84f47f2880b80e64403e95d\"" Dec 12 17:23:03.970066 containerd[1581]: time="2025-12-12T17:23:03.969792159Z" level=info msg="StartContainer for \"8ee1d7541cd612a14e8ac0b2d38d2851e6f71239a84f47f2880b80e64403e95d\"" Dec 12 17:23:03.970854 containerd[1581]: time="2025-12-12T17:23:03.970827844Z" level=info msg="connecting to shim 8ee1d7541cd612a14e8ac0b2d38d2851e6f71239a84f47f2880b80e64403e95d" address="unix:///run/containerd/s/ac3db57585e1e729ea8426abec6118109fc08da1f58f9df6bcc0491f03f84943" protocol=ttrpc version=3 Dec 12 17:23:03.996287 systemd[1]: Started cri-containerd-8ee1d7541cd612a14e8ac0b2d38d2851e6f71239a84f47f2880b80e64403e95d.scope - libcontainer container 8ee1d7541cd612a14e8ac0b2d38d2851e6f71239a84f47f2880b80e64403e95d. Dec 12 17:23:04.012000 audit: BPF prog-id=188 op=LOAD Dec 12 17:23:04.013000 audit: BPF prog-id=189 op=LOAD Dec 12 17:23:04.013000 audit[4374]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4326 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865653164373534316364363132613134653861633062326433386432 Dec 12 17:23:04.013000 audit: BPF prog-id=189 op=UNLOAD Dec 12 17:23:04.013000 audit[4374]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4326 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865653164373534316364363132613134653861633062326433386432 Dec 12 17:23:04.013000 audit: BPF prog-id=190 op=LOAD Dec 12 17:23:04.013000 audit[4374]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4326 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865653164373534316364363132613134653861633062326433386432 Dec 12 17:23:04.013000 audit: BPF prog-id=191 op=LOAD Dec 12 17:23:04.013000 audit[4374]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4326 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.013000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865653164373534316364363132613134653861633062326433386432 Dec 12 17:23:04.013000 audit: BPF prog-id=191 op=UNLOAD Dec 12 17:23:04.013000 audit[4374]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4326 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865653164373534316364363132613134653861633062326433386432 Dec 12 17:23:04.013000 audit: BPF prog-id=190 op=UNLOAD Dec 12 17:23:04.013000 audit[4374]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4326 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865653164373534316364363132613134653861633062326433386432 Dec 12 17:23:04.014000 audit: BPF prog-id=192 op=LOAD Dec 12 17:23:04.014000 audit[4374]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4326 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.014000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865653164373534316364363132613134653861633062326433386432 Dec 12 17:23:04.035000 audit: BPF prog-id=193 op=LOAD Dec 12 17:23:04.035000 audit[4407]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd0d101c8 a2=98 a3=ffffd0d101b8 items=0 ppid=4365 pid=4407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.035000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:23:04.035000 audit: BPF prog-id=193 op=UNLOAD Dec 12 17:23:04.035000 audit[4407]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd0d10198 a3=0 items=0 ppid=4365 pid=4407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.035000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:23:04.035000 audit: BPF prog-id=194 op=LOAD Dec 12 17:23:04.035000 audit[4407]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd0d10078 a2=74 a3=95 items=0 ppid=4365 pid=4407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.035000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:23:04.035000 audit: BPF prog-id=194 op=UNLOAD Dec 12 17:23:04.035000 audit[4407]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4365 pid=4407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.035000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:23:04.035000 audit: BPF prog-id=195 op=LOAD Dec 12 17:23:04.035000 audit[4407]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd0d100a8 a2=40 a3=ffffd0d100d8 items=0 ppid=4365 pid=4407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.035000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:23:04.035000 audit: BPF prog-id=195 op=UNLOAD Dec 12 17:23:04.035000 audit[4407]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffd0d100d8 items=0 ppid=4365 pid=4407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.035000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:23:04.038000 audit: BPF prog-id=196 op=LOAD Dec 12 17:23:04.038000 audit[4408]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcae290a8 a2=98 a3=ffffcae29098 items=0 ppid=4365 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.038000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:23:04.038000 audit: BPF 
prog-id=196 op=UNLOAD Dec 12 17:23:04.038000 audit[4408]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffcae29078 a3=0 items=0 ppid=4365 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.038000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:23:04.038000 audit: BPF prog-id=197 op=LOAD Dec 12 17:23:04.038000 audit[4408]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcae28d38 a2=74 a3=95 items=0 ppid=4365 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.038000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:23:04.038000 audit: BPF prog-id=197 op=UNLOAD Dec 12 17:23:04.038000 audit[4408]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4365 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.038000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:23:04.038000 audit: BPF prog-id=198 op=LOAD Dec 12 17:23:04.038000 audit[4408]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcae28d98 a2=94 a3=2 items=0 ppid=4365 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.038000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:23:04.038000 audit: BPF prog-id=198 op=UNLOAD Dec 12 17:23:04.038000 audit[4408]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4365 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.038000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:23:04.042868 containerd[1581]: time="2025-12-12T17:23:04.042289130Z" level=info msg="StartContainer for \"8ee1d7541cd612a14e8ac0b2d38d2851e6f71239a84f47f2880b80e64403e95d\" returns successfully" Dec 12 17:23:04.140000 audit: BPF prog-id=199 op=LOAD Dec 12 17:23:04.140000 audit[4408]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcae28d58 a2=40 a3=ffffcae28d88 items=0 ppid=4365 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:23:04.140000 audit: BPF prog-id=199 op=UNLOAD Dec 12 17:23:04.140000 audit[4408]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffcae28d88 items=0 ppid=4365 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.140000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E 
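The audit PROCTITLE records hex-encode the audited command line, with NUL bytes separating the arguments. Decoding the value that repeats through the bpftool records above recovers "bpftool map list --json"; the same decoding applied to the earlier runc records and to the later "bpftool prog load /usr/lib/calico/bpf/filter.o ..." records yields those command lines as well:

    def decode_proctitle(hex_value: str) -> str:
        """Turn an audit PROCTITLE hex string back into the command line."""
        return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode()

    print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))
    # -> bpftool map list --json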
Dec 12 17:23:04.165000 audit: BPF prog-id=200 op=LOAD Dec 12 17:23:04.165000 audit[4408]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffcae28d68 a2=94 a3=4 items=0 ppid=4365 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.165000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:23:04.165000 audit: BPF prog-id=200 op=UNLOAD Dec 12 17:23:04.165000 audit[4408]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4365 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.165000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:23:04.165000 audit: BPF prog-id=201 op=LOAD Dec 12 17:23:04.165000 audit[4408]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcae28ba8 a2=94 a3=5 items=0 ppid=4365 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.165000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:23:04.165000 audit: BPF prog-id=201 op=UNLOAD Dec 12 17:23:04.165000 audit[4408]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4365 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.165000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:23:04.165000 audit: BPF prog-id=202 op=LOAD Dec 12 17:23:04.165000 audit[4408]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffcae28dd8 a2=94 a3=6 items=0 ppid=4365 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.165000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:23:04.167000 audit: BPF prog-id=202 op=UNLOAD Dec 12 17:23:04.167000 audit[4408]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4365 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.167000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:23:04.167000 audit: BPF prog-id=203 op=LOAD Dec 12 17:23:04.167000 audit[4408]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffcae285a8 a2=94 a3=83 items=0 ppid=4365 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.167000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:23:04.167000 audit: BPF prog-id=204 op=LOAD Dec 12 17:23:04.167000 audit[4408]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffcae28368 a2=94 a3=2 items=0 ppid=4365 pid=4408 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.167000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:23:04.167000 audit: BPF prog-id=204 op=UNLOAD Dec 12 17:23:04.167000 audit[4408]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4365 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.167000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:23:04.168000 audit: BPF prog-id=203 op=UNLOAD Dec 12 17:23:04.168000 audit[4408]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=361e1620 a3=361d4b00 items=0 ppid=4365 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.168000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:23:04.186000 audit: BPF prog-id=205 op=LOAD Dec 12 17:23:04.186000 audit[4435]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff9a9a7b8 a2=98 a3=fffff9a9a7a8 items=0 ppid=4365 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.186000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:23:04.186000 audit: BPF prog-id=205 op=UNLOAD Dec 12 17:23:04.186000 audit[4435]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff9a9a788 a3=0 items=0 ppid=4365 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.186000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:23:04.186000 audit: BPF prog-id=206 op=LOAD Dec 12 17:23:04.186000 audit[4435]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff9a9a668 a2=74 a3=95 items=0 ppid=4365 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.186000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:23:04.186000 audit: BPF prog-id=206 op=UNLOAD Dec 12 17:23:04.186000 audit[4435]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4365 pid=4435 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.186000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:23:04.186000 audit: BPF prog-id=207 op=LOAD Dec 12 17:23:04.186000 audit[4435]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff9a9a698 a2=40 a3=fffff9a9a6c8 items=0 ppid=4365 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.186000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:23:04.186000 audit: BPF prog-id=207 op=UNLOAD Dec 12 17:23:04.186000 audit[4435]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff9a9a6c8 items=0 ppid=4365 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.186000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:23:04.285965 systemd-networkd[1502]: vxlan.calico: Link UP Dec 12 17:23:04.285975 systemd-networkd[1502]: vxlan.calico: Gained carrier Dec 12 17:23:04.313000 audit: BPF prog-id=208 op=LOAD Dec 12 17:23:04.313000 audit[4489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc0b239a8 a2=98 a3=ffffc0b23998 items=0 ppid=4365 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.313000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:23:04.313000 audit: BPF prog-id=208 op=UNLOAD Dec 12 17:23:04.313000 audit[4489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc0b23978 a3=0 items=0 ppid=4365 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.313000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:23:04.313000 audit: BPF prog-id=209 op=LOAD Dec 12 17:23:04.313000 audit[4489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc0b23688 a2=74 a3=95 items=0 ppid=4365 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.313000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:23:04.313000 audit: BPF prog-id=209 op=UNLOAD Dec 12 17:23:04.313000 audit[4489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4365 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.313000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:23:04.313000 audit: BPF prog-id=210 op=LOAD Dec 12 17:23:04.313000 audit[4489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc0b236e8 a2=94 a3=2 items=0 ppid=4365 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.313000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:23:04.313000 audit: BPF prog-id=210 op=UNLOAD Dec 12 17:23:04.313000 audit[4489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4365 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.313000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:23:04.313000 audit: BPF prog-id=211 op=LOAD Dec 12 17:23:04.313000 audit[4489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc0b23568 a2=40 a3=ffffc0b23598 items=0 ppid=4365 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.313000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:23:04.313000 audit: BPF prog-id=211 op=UNLOAD Dec 12 17:23:04.313000 audit[4489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffc0b23598 items=0 ppid=4365 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.313000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:23:04.313000 audit: BPF prog-id=212 op=LOAD Dec 12 17:23:04.313000 audit[4489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc0b236b8 a2=94 a3=b7 items=0 ppid=4365 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.313000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:23:04.313000 audit: BPF prog-id=212 op=UNLOAD Dec 12 17:23:04.313000 audit[4489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4365 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.313000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:23:04.314000 audit: BPF prog-id=213 op=LOAD Dec 12 17:23:04.314000 audit[4489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc0b22d68 a2=94 a3=2 items=0 ppid=4365 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.314000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:23:04.314000 audit: BPF prog-id=213 op=UNLOAD Dec 12 17:23:04.314000 audit[4489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4365 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.314000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:23:04.314000 audit: BPF prog-id=214 op=LOAD Dec 12 17:23:04.314000 audit[4489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc0b22ef8 a2=94 a3=30 items=0 ppid=4365 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.314000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:23:04.321000 audit: BPF prog-id=215 op=LOAD Dec 12 17:23:04.321000 audit[4491]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=3 a0=5 a1=ffffcf24c5a8 a2=98 a3=ffffcf24c598 items=0 ppid=4365 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.321000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:23:04.321000 audit: BPF prog-id=215 op=UNLOAD Dec 12 17:23:04.321000 audit[4491]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffcf24c578 a3=0 items=0 ppid=4365 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.321000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:23:04.322000 audit: BPF prog-id=216 op=LOAD Dec 12 17:23:04.322000 audit[4491]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcf24c238 a2=74 a3=95 items=0 ppid=4365 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.322000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:23:04.322000 audit: BPF prog-id=216 op=UNLOAD Dec 12 17:23:04.322000 audit[4491]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4365 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.322000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:23:04.322000 audit: BPF prog-id=217 op=LOAD Dec 12 17:23:04.322000 audit[4491]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcf24c298 a2=94 a3=2 items=0 ppid=4365 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.322000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:23:04.322000 audit: BPF prog-id=217 op=UNLOAD Dec 12 17:23:04.322000 audit[4491]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4365 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.322000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:23:04.390557 containerd[1581]: time="2025-12-12T17:23:04.390498151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7zbf,Uid:1e0e30e4-6fdf-475c-9a4a-59287d927d5d,Namespace:calico-system,Attempt:0,}" Dec 12 17:23:04.420000 audit: BPF prog-id=218 op=LOAD Dec 12 17:23:04.420000 audit[4491]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcf24c258 a2=40 a3=ffffcf24c288 items=0 ppid=4365 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.420000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:23:04.420000 audit: BPF prog-id=218 op=UNLOAD Dec 12 17:23:04.420000 audit[4491]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffcf24c288 items=0 ppid=4365 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.420000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:23:04.430000 audit: BPF prog-id=219 op=LOAD Dec 12 17:23:04.430000 audit[4491]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffcf24c268 a2=94 a3=4 items=0 ppid=4365 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.430000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:23:04.430000 audit: BPF prog-id=219 op=UNLOAD Dec 12 17:23:04.430000 audit[4491]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4365 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.430000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:23:04.430000 audit: BPF prog-id=220 op=LOAD Dec 12 17:23:04.430000 audit[4491]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcf24c0a8 a2=94 a3=5 items=0 ppid=4365 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.430000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:23:04.431000 audit: BPF prog-id=220 
op=UNLOAD Dec 12 17:23:04.431000 audit[4491]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4365 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.431000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:23:04.431000 audit: BPF prog-id=221 op=LOAD Dec 12 17:23:04.431000 audit[4491]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffcf24c2d8 a2=94 a3=6 items=0 ppid=4365 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.431000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:23:04.431000 audit: BPF prog-id=221 op=UNLOAD Dec 12 17:23:04.431000 audit[4491]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4365 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.431000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:23:04.431000 audit: BPF prog-id=222 op=LOAD Dec 12 17:23:04.431000 audit[4491]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffcf24baa8 a2=94 a3=83 items=0 ppid=4365 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.431000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:23:04.431000 audit: BPF prog-id=223 op=LOAD Dec 12 17:23:04.431000 audit[4491]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffcf24b868 a2=94 a3=2 items=0 ppid=4365 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.431000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:23:04.431000 audit: BPF prog-id=223 op=UNLOAD Dec 12 17:23:04.431000 audit[4491]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4365 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.431000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:23:04.432000 audit: BPF prog-id=222 op=UNLOAD Dec 12 17:23:04.432000 audit[4491]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=23435620 a3=23428b00 items=0 ppid=4365 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.432000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:23:04.441000 audit: BPF prog-id=214 op=UNLOAD Dec 12 17:23:04.441000 audit[4365]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000dc1600 a2=0 a3=0 items=0 ppid=3884 pid=4365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.441000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 12 17:23:04.498000 audit[4544]: NETFILTER_CFG table=mangle:125 family=2 entries=16 op=nft_register_chain pid=4544 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:23:04.498000 audit[4544]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=fffff8d1f280 a2=0 a3=ffff9373efa8 items=0 ppid=4365 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.498000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:23:04.500000 audit[4542]: NETFILTER_CFG table=nat:126 family=2 entries=15 op=nft_register_chain pid=4542 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:23:04.500000 audit[4542]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffc241cde0 a2=0 a3=ffffbe9c0fa8 items=0 ppid=4365 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.500000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:23:04.505000 audit[4541]: NETFILTER_CFG table=raw:127 family=2 entries=21 op=nft_register_chain pid=4541 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:23:04.505000 audit[4541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=fffffddb1f10 a2=0 a3=ffff9cd20fa8 items=0 ppid=4365 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.505000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:23:04.524471 systemd-networkd[1502]: 
calid039a3b3814: Link UP Dec 12 17:23:04.525289 systemd-networkd[1502]: calid039a3b3814: Gained carrier Dec 12 17:23:04.516000 audit[4549]: NETFILTER_CFG table=filter:128 family=2 entries=162 op=nft_register_chain pid=4549 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:23:04.516000 audit[4549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=92296 a0=3 a1=ffffed6b9dc0 a2=0 a3=ffffaf711fa8 items=0 ppid=4365 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.516000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:23:04.542275 containerd[1581]: 2025-12-12 17:23:04.439 [INFO][4498] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--c7zbf-eth0 csi-node-driver- calico-system 1e0e30e4-6fdf-475c-9a4a-59287d927d5d 707 0 2025-12-12 17:22:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-c7zbf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid039a3b3814 [] [] }} ContainerID="56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996" Namespace="calico-system" Pod="csi-node-driver-c7zbf" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7zbf-" Dec 12 17:23:04.542275 containerd[1581]: 2025-12-12 17:23:04.439 [INFO][4498] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996" Namespace="calico-system" Pod="csi-node-driver-c7zbf" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7zbf-eth0" Dec 12 17:23:04.542275 containerd[1581]: 2025-12-12 17:23:04.474 [INFO][4515] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996" HandleID="k8s-pod-network.56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996" Workload="localhost-k8s-csi--node--driver--c7zbf-eth0" Dec 12 17:23:04.542275 containerd[1581]: 2025-12-12 17:23:04.474 [INFO][4515] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996" HandleID="k8s-pod-network.56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996" Workload="localhost-k8s-csi--node--driver--c7zbf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3560), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-c7zbf", "timestamp":"2025-12-12 17:23:04.474010796 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:23:04.542275 containerd[1581]: 2025-12-12 17:23:04.474 [INFO][4515] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 12 17:23:04.542275 containerd[1581]: 2025-12-12 17:23:04.474 [INFO][4515] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:23:04.542275 containerd[1581]: 2025-12-12 17:23:04.474 [INFO][4515] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:23:04.542275 containerd[1581]: 2025-12-12 17:23:04.485 [INFO][4515] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996" host="localhost" Dec 12 17:23:04.542275 containerd[1581]: 2025-12-12 17:23:04.492 [INFO][4515] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:23:04.542275 containerd[1581]: 2025-12-12 17:23:04.500 [INFO][4515] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:23:04.542275 containerd[1581]: 2025-12-12 17:23:04.504 [INFO][4515] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:23:04.542275 containerd[1581]: 2025-12-12 17:23:04.507 [INFO][4515] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:23:04.542275 containerd[1581]: 2025-12-12 17:23:04.507 [INFO][4515] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996" host="localhost" Dec 12 17:23:04.542275 containerd[1581]: 2025-12-12 17:23:04.509 [INFO][4515] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996 Dec 12 17:23:04.542275 containerd[1581]: 2025-12-12 17:23:04.513 [INFO][4515] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996" host="localhost" Dec 12 17:23:04.542275 containerd[1581]: 2025-12-12 17:23:04.520 [INFO][4515] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996" host="localhost" Dec 12 17:23:04.542275 containerd[1581]: 2025-12-12 17:23:04.520 [INFO][4515] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996" host="localhost" Dec 12 17:23:04.542275 containerd[1581]: 2025-12-12 17:23:04.520 [INFO][4515] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:23:04.542275 containerd[1581]: 2025-12-12 17:23:04.520 [INFO][4515] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996" HandleID="k8s-pod-network.56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996" Workload="localhost-k8s-csi--node--driver--c7zbf-eth0" Dec 12 17:23:04.543388 containerd[1581]: 2025-12-12 17:23:04.522 [INFO][4498] cni-plugin/k8s.go 418: Populated endpoint ContainerID="56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996" Namespace="calico-system" Pod="csi-node-driver-c7zbf" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7zbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c7zbf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1e0e30e4-6fdf-475c-9a4a-59287d927d5d", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 22, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-c7zbf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid039a3b3814", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:23:04.543388 containerd[1581]: 2025-12-12 17:23:04.522 [INFO][4498] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996" Namespace="calico-system" Pod="csi-node-driver-c7zbf" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7zbf-eth0" Dec 12 17:23:04.543388 containerd[1581]: 2025-12-12 17:23:04.522 [INFO][4498] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid039a3b3814 ContainerID="56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996" Namespace="calico-system" Pod="csi-node-driver-c7zbf" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7zbf-eth0" Dec 12 17:23:04.543388 containerd[1581]: 2025-12-12 17:23:04.524 [INFO][4498] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996" Namespace="calico-system" Pod="csi-node-driver-c7zbf" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7zbf-eth0" Dec 12 17:23:04.543388 containerd[1581]: 2025-12-12 17:23:04.525 [INFO][4498] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996" Namespace="calico-system" Pod="csi-node-driver-c7zbf" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--c7zbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c7zbf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1e0e30e4-6fdf-475c-9a4a-59287d927d5d", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 22, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996", Pod:"csi-node-driver-c7zbf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid039a3b3814", MAC:"3a:fd:79:dd:87:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:23:04.543388 containerd[1581]: 2025-12-12 17:23:04.537 [INFO][4498] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996" Namespace="calico-system" Pod="csi-node-driver-c7zbf" WorkloadEndpoint="localhost-k8s-csi--node--driver--c7zbf-eth0" Dec 12 17:23:04.565000 audit[4565]: NETFILTER_CFG table=filter:129 family=2 entries=40 op=nft_register_chain pid=4565 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:23:04.565000 audit[4565]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20748 a0=3 a1=ffffe25c4a00 a2=0 a3=ffff91436fa8 items=0 ppid=4365 pid=4565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.565000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:23:04.568715 containerd[1581]: time="2025-12-12T17:23:04.568675356Z" level=info msg="connecting to shim 56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996" address="unix:///run/containerd/s/ce79f2ed6875f36bac661401100cea24124862b980ded43660b7e0184a1ba96f" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:23:04.572505 kubelet[2721]: E1212 17:23:04.571913 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-9d446d548-w7qwm" podUID="2fb0c1c5-10e8-4dec-a57c-dc20c81a6882" Dec 12 17:23:04.572505 kubelet[2721]: E1212 17:23:04.572254 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:23:04.627296 systemd[1]: Started cri-containerd-56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996.scope - libcontainer container 56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996. Dec 12 17:23:04.626000 audit[4600]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=4600 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:04.626000 audit[4600]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe6d17980 a2=0 a3=1 items=0 ppid=2830 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.626000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:04.635000 audit[4600]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=4600 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:04.635000 audit[4600]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe6d17980 a2=0 a3=1 items=0 ppid=2830 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.635000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:04.638000 audit: BPF prog-id=224 op=LOAD Dec 12 17:23:04.639000 audit: BPF prog-id=225 op=LOAD Dec 12 17:23:04.639000 audit[4586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e180 a2=98 a3=0 items=0 ppid=4573 pid=4586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536616131316564353639653035643065363237333833363333643430 Dec 12 17:23:04.639000 audit: BPF prog-id=225 op=UNLOAD Dec 12 17:23:04.639000 audit[4586]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4573 pid=4586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536616131316564353639653035643065363237333833363333643430 Dec 12 17:23:04.639000 audit: BPF prog-id=226 op=LOAD Dec 12 17:23:04.639000 audit[4586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e3e8 a2=98 a3=0 items=0 ppid=4573 pid=4586 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536616131316564353639653035643065363237333833363333643430 Dec 12 17:23:04.639000 audit: BPF prog-id=227 op=LOAD Dec 12 17:23:04.639000 audit[4586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400017e168 a2=98 a3=0 items=0 ppid=4573 pid=4586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536616131316564353639653035643065363237333833363333643430 Dec 12 17:23:04.639000 audit: BPF prog-id=227 op=UNLOAD Dec 12 17:23:04.639000 audit[4586]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4573 pid=4586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536616131316564353639653035643065363237333833363333643430 Dec 12 17:23:04.639000 audit: BPF prog-id=226 op=UNLOAD Dec 12 17:23:04.639000 audit[4586]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4573 pid=4586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536616131316564353639653035643065363237333833363333643430 Dec 12 17:23:04.640000 audit: BPF prog-id=228 op=LOAD Dec 12 17:23:04.640000 audit[4586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e648 a2=98 a3=0 items=0 ppid=4573 pid=4586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:04.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536616131316564353639653035643065363237333833363333643430 Dec 12 17:23:04.641657 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:23:04.660888 containerd[1581]: time="2025-12-12T17:23:04.660841170Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-c7zbf,Uid:1e0e30e4-6fdf-475c-9a4a-59287d927d5d,Namespace:calico-system,Attempt:0,} returns sandbox id \"56aa11ed569e05d0e627383633d405aa70683ef28e3176ddcd4a96cf6f7d2996\"" Dec 12 17:23:04.662677 containerd[1581]: time="2025-12-12T17:23:04.662595965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:23:04.857327 containerd[1581]: time="2025-12-12T17:23:04.857278754Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:04.864194 containerd[1581]: time="2025-12-12T17:23:04.864143407Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:23:04.864311 containerd[1581]: time="2025-12-12T17:23:04.864218410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:04.864420 kubelet[2721]: E1212 17:23:04.864379 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:23:04.864463 kubelet[2721]: E1212 17:23:04.864432 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:23:04.864579 kubelet[2721]: E1212 17:23:04.864544 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8stxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-c7zbf_calico-system(1e0e30e4-6fdf-475c-9a4a-59287d927d5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:04.866505 containerd[1581]: time="2025-12-12T17:23:04.866473826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:23:05.052616 containerd[1581]: time="2025-12-12T17:23:05.052369666Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:05.053703 containerd[1581]: time="2025-12-12T17:23:05.053459391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:23:05.053826 containerd[1581]: time="2025-12-12T17:23:05.053536194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:05.053975 kubelet[2721]: E1212 17:23:05.053930 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:23:05.054067 kubelet[2721]: E1212 17:23:05.053997 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:23:05.054247 kubelet[2721]: E1212 17:23:05.054205 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8stxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-c7zbf_calico-system(1e0e30e4-6fdf-475c-9a4a-59287d927d5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:05.055601 kubelet[2721]: E1212 17:23:05.055536 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c7zbf" podUID="1e0e30e4-6fdf-475c-9a4a-59287d927d5d" Dec 12 17:23:05.310668 systemd-networkd[1502]: cali352a7adab84: Gained IPv6LL Dec 12 17:23:05.557095 systemd[1]: Started sshd@8-10.0.0.37:22-10.0.0.1:38410.service - OpenSSH per-connection server daemon (10.0.0.1:38410). 
Dec 12 17:23:05.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.37:22-10.0.0.1:38410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:05.576354 kubelet[2721]: E1212 17:23:05.576321 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:23:05.576667 kubelet[2721]: E1212 17:23:05.576386 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c7zbf" podUID="1e0e30e4-6fdf-475c-9a4a-59287d927d5d" Dec 12 17:23:05.614669 kubelet[2721]: I1212 17:23:05.614594 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fvnjr" podStartSLOduration=38.614574249 podStartE2EDuration="38.614574249s" podCreationTimestamp="2025-12-12 17:22:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:23:04.625057643 +0000 UTC m=+43.335861901" watchObservedRunningTime="2025-12-12 17:23:05.614574249 +0000 UTC m=+44.325378507" Dec 12 17:23:05.633000 audit[4616]: USER_ACCT pid=4616 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:05.634358 sshd[4616]: Accepted publickey for core from 10.0.0.1 port 38410 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:23:05.636000 audit[4616]: CRED_ACQ pid=4616 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:05.636000 audit[4616]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffc2352f0 a2=3 a3=0 items=0 ppid=1 pid=4616 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:05.636000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:05.638447 sshd-session[4616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:23:05.641000 audit[4620]: NETFILTER_CFG table=filter:132 family=2 entries=17 op=nft_register_rule pid=4620 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:05.641000 audit[4620]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=5248 a0=3 a1=fffff6fe8910 a2=0 a3=1 items=0 ppid=2830 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:05.641000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:05.644553 systemd-logind[1562]: New session 9 of user core. Dec 12 17:23:05.648000 audit[4620]: NETFILTER_CFG table=nat:133 family=2 entries=35 op=nft_register_chain pid=4620 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:05.648000 audit[4620]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffff6fe8910 a2=0 a3=1 items=0 ppid=2830 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:05.648000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:05.653234 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 17:23:05.654000 audit[4616]: USER_START pid=4616 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:05.655000 audit[4621]: CRED_ACQ pid=4621 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:05.788331 sshd[4621]: Connection closed by 10.0.0.1 port 38410 Dec 12 17:23:05.788700 sshd-session[4616]: pam_unix(sshd:session): session closed for user core Dec 12 17:23:05.788000 audit[4616]: USER_END pid=4616 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:05.788000 audit[4616]: CRED_DISP pid=4616 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:05.791982 systemd[1]: sshd@8-10.0.0.37:22-10.0.0.1:38410.service: Deactivated successfully. Dec 12 17:23:05.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.37:22-10.0.0.1:38410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:05.793792 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 17:23:05.795239 systemd-logind[1562]: Session 9 logged out. Waiting for processes to exit. Dec 12 17:23:05.796407 systemd-logind[1562]: Removed session 9. 
Dec 12 17:23:05.822372 systemd-networkd[1502]: calid039a3b3814: Gained IPv6LL Dec 12 17:23:06.142217 systemd-networkd[1502]: vxlan.calico: Gained IPv6LL Dec 12 17:23:06.389511 kubelet[2721]: E1212 17:23:06.389416 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:23:06.390025 containerd[1581]: time="2025-12-12T17:23:06.389757978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m45nr,Uid:8c46f9b9-021e-4630-9626-c64d156571c2,Namespace:kube-system,Attempt:0,}" Dec 12 17:23:06.390464 containerd[1581]: time="2025-12-12T17:23:06.390220876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8546bdfd97-vmgm5,Uid:814ad701-33db-45b4-b87d-3357a1210c6d,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:23:06.390464 containerd[1581]: time="2025-12-12T17:23:06.390282919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8546bdfd97-9gwkk,Uid:84025a67-a460-4acb-8e7e-d73c2b743a45,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:23:06.529763 systemd-networkd[1502]: cali191cbcdd9cf: Link UP Dec 12 17:23:06.530538 systemd-networkd[1502]: cali191cbcdd9cf: Gained carrier Dec 12 17:23:06.543530 containerd[1581]: 2025-12-12 17:23:06.457 [INFO][4643] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--m45nr-eth0 coredns-668d6bf9bc- kube-system 8c46f9b9-021e-4630-9626-c64d156571c2 817 0 2025-12-12 17:22:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-m45nr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali191cbcdd9cf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d" Namespace="kube-system" Pod="coredns-668d6bf9bc-m45nr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m45nr-" Dec 12 17:23:06.543530 containerd[1581]: 2025-12-12 17:23:06.457 [INFO][4643] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d" Namespace="kube-system" Pod="coredns-668d6bf9bc-m45nr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m45nr-eth0" Dec 12 17:23:06.543530 containerd[1581]: 2025-12-12 17:23:06.486 [INFO][4687] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d" HandleID="k8s-pod-network.3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d" Workload="localhost-k8s-coredns--668d6bf9bc--m45nr-eth0" Dec 12 17:23:06.543530 containerd[1581]: 2025-12-12 17:23:06.486 [INFO][4687] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d" HandleID="k8s-pod-network.3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d" Workload="localhost-k8s-coredns--668d6bf9bc--m45nr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137700), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-m45nr", "timestamp":"2025-12-12 17:23:06.486656509 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:23:06.543530 containerd[1581]: 2025-12-12 17:23:06.486 [INFO][4687] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:23:06.543530 containerd[1581]: 2025-12-12 17:23:06.486 [INFO][4687] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:23:06.543530 containerd[1581]: 2025-12-12 17:23:06.486 [INFO][4687] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:23:06.543530 containerd[1581]: 2025-12-12 17:23:06.498 [INFO][4687] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d" host="localhost" Dec 12 17:23:06.543530 containerd[1581]: 2025-12-12 17:23:06.502 [INFO][4687] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:23:06.543530 containerd[1581]: 2025-12-12 17:23:06.507 [INFO][4687] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:23:06.543530 containerd[1581]: 2025-12-12 17:23:06.509 [INFO][4687] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:23:06.543530 containerd[1581]: 2025-12-12 17:23:06.511 [INFO][4687] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:23:06.543530 containerd[1581]: 2025-12-12 17:23:06.511 [INFO][4687] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d" host="localhost" Dec 12 17:23:06.543530 containerd[1581]: 2025-12-12 17:23:06.513 [INFO][4687] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d Dec 12 17:23:06.543530 containerd[1581]: 2025-12-12 17:23:06.517 [INFO][4687] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d" host="localhost" Dec 12 17:23:06.543530 containerd[1581]: 2025-12-12 17:23:06.523 [INFO][4687] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d" host="localhost" Dec 12 17:23:06.543530 containerd[1581]: 2025-12-12 17:23:06.523 [INFO][4687] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d" host="localhost" Dec 12 17:23:06.543530 containerd[1581]: 2025-12-12 17:23:06.523 [INFO][4687] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:23:06.543530 containerd[1581]: 2025-12-12 17:23:06.523 [INFO][4687] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d" HandleID="k8s-pod-network.3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d" Workload="localhost-k8s-coredns--668d6bf9bc--m45nr-eth0" Dec 12 17:23:06.544283 containerd[1581]: 2025-12-12 17:23:06.527 [INFO][4643] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d" Namespace="kube-system" Pod="coredns-668d6bf9bc-m45nr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m45nr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m45nr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8c46f9b9-021e-4630-9626-c64d156571c2", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 22, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-m45nr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali191cbcdd9cf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:23:06.544283 containerd[1581]: 2025-12-12 17:23:06.527 [INFO][4643] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d" Namespace="kube-system" Pod="coredns-668d6bf9bc-m45nr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m45nr-eth0" Dec 12 17:23:06.544283 containerd[1581]: 2025-12-12 17:23:06.527 [INFO][4643] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali191cbcdd9cf ContainerID="3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d" Namespace="kube-system" Pod="coredns-668d6bf9bc-m45nr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m45nr-eth0" Dec 12 17:23:06.544283 containerd[1581]: 2025-12-12 17:23:06.530 [INFO][4643] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d" Namespace="kube-system" Pod="coredns-668d6bf9bc-m45nr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m45nr-eth0" Dec 12 17:23:06.544283 
containerd[1581]: 2025-12-12 17:23:06.530 [INFO][4643] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d" Namespace="kube-system" Pod="coredns-668d6bf9bc-m45nr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m45nr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m45nr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8c46f9b9-021e-4630-9626-c64d156571c2", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 22, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d", Pod:"coredns-668d6bf9bc-m45nr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali191cbcdd9cf", MAC:"86:6e:2c:f8:fb:05", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:23:06.544283 containerd[1581]: 2025-12-12 17:23:06.541 [INFO][4643] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d" Namespace="kube-system" Pod="coredns-668d6bf9bc-m45nr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m45nr-eth0" Dec 12 17:23:06.555000 audit[4715]: NETFILTER_CFG table=filter:134 family=2 entries=40 op=nft_register_chain pid=4715 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:23:06.555000 audit[4715]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20328 a0=3 a1=ffffe20156d0 a2=0 a3=ffff83859fa8 items=0 ppid=4365 pid=4715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.555000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:23:06.569847 containerd[1581]: time="2025-12-12T17:23:06.569790762Z" level=info msg="connecting to shim 3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d" address="unix:///run/containerd/s/64de22f204c2f03e0fa5d6820ee2c11b8c229e940f04c2d385d62a027a364cb1" namespace=k8s.io 
protocol=ttrpc version=3 Dec 12 17:23:06.578115 kubelet[2721]: E1212 17:23:06.577224 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:23:06.580814 kubelet[2721]: E1212 17:23:06.580446 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c7zbf" podUID="1e0e30e4-6fdf-475c-9a4a-59287d927d5d" Dec 12 17:23:06.615237 systemd[1]: Started cri-containerd-3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d.scope - libcontainer container 3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d. Dec 12 17:23:06.626000 audit: BPF prog-id=229 op=LOAD Dec 12 17:23:06.627000 audit: BPF prog-id=230 op=LOAD Dec 12 17:23:06.627000 audit[4736]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4725 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366626539363937323532613533353235353733363834383965373137 Dec 12 17:23:06.627000 audit: BPF prog-id=230 op=UNLOAD Dec 12 17:23:06.627000 audit[4736]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4725 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366626539363937323532613533353235353733363834383965373137 Dec 12 17:23:06.627000 audit: BPF prog-id=231 op=LOAD Dec 12 17:23:06.627000 audit[4736]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4725 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366626539363937323532613533353235353733363834383965373137 Dec 12 
17:23:06.627000 audit: BPF prog-id=232 op=LOAD Dec 12 17:23:06.627000 audit[4736]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4725 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366626539363937323532613533353235353733363834383965373137 Dec 12 17:23:06.627000 audit: BPF prog-id=232 op=UNLOAD Dec 12 17:23:06.627000 audit[4736]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4725 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366626539363937323532613533353235353733363834383965373137 Dec 12 17:23:06.627000 audit: BPF prog-id=231 op=UNLOAD Dec 12 17:23:06.627000 audit[4736]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4725 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366626539363937323532613533353235353733363834383965373137 Dec 12 17:23:06.627000 audit: BPF prog-id=233 op=LOAD Dec 12 17:23:06.627000 audit[4736]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4725 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366626539363937323532613533353235353733363834383965373137 Dec 12 17:23:06.629839 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:23:06.649989 systemd-networkd[1502]: calied201f9cda6: Link UP Dec 12 17:23:06.655750 systemd-networkd[1502]: calied201f9cda6: Gained carrier Dec 12 17:23:06.667636 containerd[1581]: time="2025-12-12T17:23:06.667587210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m45nr,Uid:8c46f9b9-021e-4630-9626-c64d156571c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d\"" Dec 12 17:23:06.668498 kubelet[2721]: E1212 17:23:06.668474 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Dec 12 17:23:06.675911 containerd[1581]: time="2025-12-12T17:23:06.674997551Z" level=info msg="CreateContainer within sandbox \"3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:23:06.679148 containerd[1581]: 2025-12-12 17:23:06.455 [INFO][4635] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8546bdfd97--9gwkk-eth0 calico-apiserver-8546bdfd97- calico-apiserver 84025a67-a460-4acb-8e7e-d73c2b743a45 820 0 2025-12-12 17:22:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8546bdfd97 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8546bdfd97-9gwkk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calied201f9cda6 [] [] }} ContainerID="6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34" Namespace="calico-apiserver" Pod="calico-apiserver-8546bdfd97-9gwkk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8546bdfd97--9gwkk-" Dec 12 17:23:06.679148 containerd[1581]: 2025-12-12 17:23:06.455 [INFO][4635] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34" Namespace="calico-apiserver" Pod="calico-apiserver-8546bdfd97-9gwkk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8546bdfd97--9gwkk-eth0" Dec 12 17:23:06.679148 containerd[1581]: 2025-12-12 17:23:06.487 [INFO][4682] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34" HandleID="k8s-pod-network.6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34" Workload="localhost-k8s-calico--apiserver--8546bdfd97--9gwkk-eth0" Dec 12 17:23:06.679148 containerd[1581]: 2025-12-12 17:23:06.487 [INFO][4682] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34" HandleID="k8s-pod-network.6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34" Workload="localhost-k8s-calico--apiserver--8546bdfd97--9gwkk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003555c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8546bdfd97-9gwkk", "timestamp":"2025-12-12 17:23:06.487732113 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:23:06.679148 containerd[1581]: 2025-12-12 17:23:06.487 [INFO][4682] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:23:06.679148 containerd[1581]: 2025-12-12 17:23:06.523 [INFO][4682] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:23:06.679148 containerd[1581]: 2025-12-12 17:23:06.523 [INFO][4682] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:23:06.679148 containerd[1581]: 2025-12-12 17:23:06.600 [INFO][4682] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34" host="localhost" Dec 12 17:23:06.679148 containerd[1581]: 2025-12-12 17:23:06.607 [INFO][4682] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:23:06.679148 containerd[1581]: 2025-12-12 17:23:06.617 [INFO][4682] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:23:06.679148 containerd[1581]: 2025-12-12 17:23:06.620 [INFO][4682] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:23:06.679148 containerd[1581]: 2025-12-12 17:23:06.623 [INFO][4682] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:23:06.679148 containerd[1581]: 2025-12-12 17:23:06.623 [INFO][4682] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34" host="localhost" Dec 12 17:23:06.679148 containerd[1581]: 2025-12-12 17:23:06.625 [INFO][4682] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34 Dec 12 17:23:06.679148 containerd[1581]: 2025-12-12 17:23:06.631 [INFO][4682] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34" host="localhost" Dec 12 17:23:06.679148 containerd[1581]: 2025-12-12 17:23:06.639 [INFO][4682] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34" host="localhost" Dec 12 17:23:06.679148 containerd[1581]: 2025-12-12 17:23:06.639 [INFO][4682] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34" host="localhost" Dec 12 17:23:06.679148 containerd[1581]: 2025-12-12 17:23:06.639 [INFO][4682] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:23:06.679148 containerd[1581]: 2025-12-12 17:23:06.639 [INFO][4682] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34" HandleID="k8s-pod-network.6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34" Workload="localhost-k8s-calico--apiserver--8546bdfd97--9gwkk-eth0" Dec 12 17:23:06.679686 containerd[1581]: 2025-12-12 17:23:06.642 [INFO][4635] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34" Namespace="calico-apiserver" Pod="calico-apiserver-8546bdfd97-9gwkk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8546bdfd97--9gwkk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8546bdfd97--9gwkk-eth0", GenerateName:"calico-apiserver-8546bdfd97-", Namespace:"calico-apiserver", SelfLink:"", UID:"84025a67-a460-4acb-8e7e-d73c2b743a45", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8546bdfd97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8546bdfd97-9gwkk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied201f9cda6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:23:06.679686 containerd[1581]: 2025-12-12 17:23:06.642 [INFO][4635] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34" Namespace="calico-apiserver" Pod="calico-apiserver-8546bdfd97-9gwkk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8546bdfd97--9gwkk-eth0" Dec 12 17:23:06.679686 containerd[1581]: 2025-12-12 17:23:06.642 [INFO][4635] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied201f9cda6 ContainerID="6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34" Namespace="calico-apiserver" Pod="calico-apiserver-8546bdfd97-9gwkk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8546bdfd97--9gwkk-eth0" Dec 12 17:23:06.679686 containerd[1581]: 2025-12-12 17:23:06.657 [INFO][4635] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34" Namespace="calico-apiserver" Pod="calico-apiserver-8546bdfd97-9gwkk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8546bdfd97--9gwkk-eth0" Dec 12 17:23:06.679686 containerd[1581]: 2025-12-12 17:23:06.658 [INFO][4635] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34" Namespace="calico-apiserver" Pod="calico-apiserver-8546bdfd97-9gwkk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8546bdfd97--9gwkk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8546bdfd97--9gwkk-eth0", GenerateName:"calico-apiserver-8546bdfd97-", Namespace:"calico-apiserver", SelfLink:"", UID:"84025a67-a460-4acb-8e7e-d73c2b743a45", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8546bdfd97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34", Pod:"calico-apiserver-8546bdfd97-9gwkk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied201f9cda6", MAC:"0a:c7:38:93:2c:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:23:06.679686 containerd[1581]: 2025-12-12 17:23:06.674 [INFO][4635] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34" Namespace="calico-apiserver" Pod="calico-apiserver-8546bdfd97-9gwkk" WorkloadEndpoint="localhost-k8s-calico--apiserver--8546bdfd97--9gwkk-eth0" Dec 12 17:23:06.685000 audit[4775]: NETFILTER_CFG table=filter:135 family=2 entries=62 op=nft_register_chain pid=4775 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:23:06.685000 audit[4775]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=31756 a0=3 a1=fffffbf61a10 a2=0 a3=ffffacc13fa8 items=0 ppid=4365 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.685000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:23:06.687907 containerd[1581]: time="2025-12-12T17:23:06.687859593Z" level=info msg="Container 3b9ffc1cf8f8c9cfde40f7050a95579c1efee4d3c3b4f3188ae2a698e895a5c1: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:23:06.692640 containerd[1581]: time="2025-12-12T17:23:06.692589184Z" level=info msg="CreateContainer within sandbox \"3fbe9697252a5352557368489e717267b01d5d768d4d0d9c349c298b641d7d0d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3b9ffc1cf8f8c9cfde40f7050a95579c1efee4d3c3b4f3188ae2a698e895a5c1\"" Dec 12 17:23:06.694026 containerd[1581]: time="2025-12-12T17:23:06.693996762Z" level=info msg="StartContainer 
for \"3b9ffc1cf8f8c9cfde40f7050a95579c1efee4d3c3b4f3188ae2a698e895a5c1\"" Dec 12 17:23:06.694851 containerd[1581]: time="2025-12-12T17:23:06.694828035Z" level=info msg="connecting to shim 3b9ffc1cf8f8c9cfde40f7050a95579c1efee4d3c3b4f3188ae2a698e895a5c1" address="unix:///run/containerd/s/64de22f204c2f03e0fa5d6820ee2c11b8c229e940f04c2d385d62a027a364cb1" protocol=ttrpc version=3 Dec 12 17:23:06.717254 systemd[1]: Started cri-containerd-3b9ffc1cf8f8c9cfde40f7050a95579c1efee4d3c3b4f3188ae2a698e895a5c1.scope - libcontainer container 3b9ffc1cf8f8c9cfde40f7050a95579c1efee4d3c3b4f3188ae2a698e895a5c1. Dec 12 17:23:06.721493 containerd[1581]: time="2025-12-12T17:23:06.721433315Z" level=info msg="connecting to shim 6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34" address="unix:///run/containerd/s/06ddc7c418bdb2d3c4bec990f3f4572f8e476c0dc95e80bd60d9d6cee1cee946" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:23:06.751642 systemd-networkd[1502]: cali5d45b178ab1: Link UP Dec 12 17:23:06.752550 systemd-networkd[1502]: cali5d45b178ab1: Gained carrier Dec 12 17:23:06.753247 systemd[1]: Started cri-containerd-6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34.scope - libcontainer container 6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34. Dec 12 17:23:06.755000 audit: BPF prog-id=234 op=LOAD Dec 12 17:23:06.756000 audit: BPF prog-id=235 op=LOAD Dec 12 17:23:06.756000 audit[4776]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=4725 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362396666633163663866386339636664653430663730353061393535 Dec 12 17:23:06.756000 audit: BPF prog-id=235 op=UNLOAD Dec 12 17:23:06.756000 audit[4776]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4725 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362396666633163663866386339636664653430663730353061393535 Dec 12 17:23:06.756000 audit: BPF prog-id=236 op=LOAD Dec 12 17:23:06.756000 audit[4776]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=4725 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362396666633163663866386339636664653430663730353061393535 Dec 12 17:23:06.756000 audit: BPF prog-id=237 op=LOAD Dec 12 17:23:06.756000 audit[4776]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 
a2=98 a3=0 items=0 ppid=4725 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362396666633163663866386339636664653430663730353061393535 Dec 12 17:23:06.756000 audit: BPF prog-id=237 op=UNLOAD Dec 12 17:23:06.756000 audit[4776]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4725 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362396666633163663866386339636664653430663730353061393535 Dec 12 17:23:06.757000 audit: BPF prog-id=236 op=UNLOAD Dec 12 17:23:06.757000 audit[4776]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4725 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362396666633163663866386339636664653430663730353061393535 Dec 12 17:23:06.757000 audit: BPF prog-id=238 op=LOAD Dec 12 17:23:06.757000 audit[4776]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=4725 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362396666633163663866386339636664653430663730353061393535 Dec 12 17:23:06.775087 containerd[1581]: 2025-12-12 17:23:06.455 [INFO][4642] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8546bdfd97--vmgm5-eth0 calico-apiserver-8546bdfd97- calico-apiserver 814ad701-33db-45b4-b87d-3357a1210c6d 821 0 2025-12-12 17:22:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8546bdfd97 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8546bdfd97-vmgm5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5d45b178ab1 [] [] }} ContainerID="8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89" Namespace="calico-apiserver" Pod="calico-apiserver-8546bdfd97-vmgm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8546bdfd97--vmgm5-" 
Dec 12 17:23:06.775087 containerd[1581]: 2025-12-12 17:23:06.455 [INFO][4642] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89" Namespace="calico-apiserver" Pod="calico-apiserver-8546bdfd97-vmgm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8546bdfd97--vmgm5-eth0" Dec 12 17:23:06.775087 containerd[1581]: 2025-12-12 17:23:06.500 [INFO][4690] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89" HandleID="k8s-pod-network.8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89" Workload="localhost-k8s-calico--apiserver--8546bdfd97--vmgm5-eth0" Dec 12 17:23:06.775087 containerd[1581]: 2025-12-12 17:23:06.500 [INFO][4690] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89" HandleID="k8s-pod-network.8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89" Workload="localhost-k8s-calico--apiserver--8546bdfd97--vmgm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400050eb50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8546bdfd97-vmgm5", "timestamp":"2025-12-12 17:23:06.500310543 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:23:06.775087 containerd[1581]: 2025-12-12 17:23:06.500 [INFO][4690] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:23:06.775087 containerd[1581]: 2025-12-12 17:23:06.639 [INFO][4690] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:23:06.775087 containerd[1581]: 2025-12-12 17:23:06.639 [INFO][4690] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:23:06.775087 containerd[1581]: 2025-12-12 17:23:06.699 [INFO][4690] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89" host="localhost" Dec 12 17:23:06.775087 containerd[1581]: 2025-12-12 17:23:06.713 [INFO][4690] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:23:06.775087 containerd[1581]: 2025-12-12 17:23:06.723 [INFO][4690] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:23:06.775087 containerd[1581]: 2025-12-12 17:23:06.725 [INFO][4690] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:23:06.775087 containerd[1581]: 2025-12-12 17:23:06.728 [INFO][4690] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:23:06.775087 containerd[1581]: 2025-12-12 17:23:06.728 [INFO][4690] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89" host="localhost" Dec 12 17:23:06.775087 containerd[1581]: 2025-12-12 17:23:06.732 [INFO][4690] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89 Dec 12 17:23:06.775087 containerd[1581]: 2025-12-12 17:23:06.737 [INFO][4690] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89" host="localhost" Dec 12 17:23:06.775087 containerd[1581]: 2025-12-12 17:23:06.746 [INFO][4690] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89" host="localhost" Dec 12 17:23:06.775087 containerd[1581]: 2025-12-12 17:23:06.746 [INFO][4690] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89" host="localhost" Dec 12 17:23:06.775087 containerd[1581]: 2025-12-12 17:23:06.746 [INFO][4690] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:23:06.775087 containerd[1581]: 2025-12-12 17:23:06.747 [INFO][4690] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89" HandleID="k8s-pod-network.8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89" Workload="localhost-k8s-calico--apiserver--8546bdfd97--vmgm5-eth0" Dec 12 17:23:06.775593 containerd[1581]: 2025-12-12 17:23:06.749 [INFO][4642] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89" Namespace="calico-apiserver" Pod="calico-apiserver-8546bdfd97-vmgm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8546bdfd97--vmgm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8546bdfd97--vmgm5-eth0", GenerateName:"calico-apiserver-8546bdfd97-", Namespace:"calico-apiserver", SelfLink:"", UID:"814ad701-33db-45b4-b87d-3357a1210c6d", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8546bdfd97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8546bdfd97-vmgm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d45b178ab1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:23:06.775593 containerd[1581]: 2025-12-12 17:23:06.749 [INFO][4642] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89" Namespace="calico-apiserver" Pod="calico-apiserver-8546bdfd97-vmgm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8546bdfd97--vmgm5-eth0" Dec 12 17:23:06.775593 containerd[1581]: 2025-12-12 17:23:06.749 [INFO][4642] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d45b178ab1 ContainerID="8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89" Namespace="calico-apiserver" Pod="calico-apiserver-8546bdfd97-vmgm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8546bdfd97--vmgm5-eth0" Dec 12 17:23:06.775593 containerd[1581]: 2025-12-12 17:23:06.752 [INFO][4642] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89" Namespace="calico-apiserver" Pod="calico-apiserver-8546bdfd97-vmgm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8546bdfd97--vmgm5-eth0" Dec 12 17:23:06.775593 containerd[1581]: 2025-12-12 17:23:06.752 [INFO][4642] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89" Namespace="calico-apiserver" Pod="calico-apiserver-8546bdfd97-vmgm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8546bdfd97--vmgm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8546bdfd97--vmgm5-eth0", GenerateName:"calico-apiserver-8546bdfd97-", Namespace:"calico-apiserver", SelfLink:"", UID:"814ad701-33db-45b4-b87d-3357a1210c6d", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 22, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8546bdfd97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89", Pod:"calico-apiserver-8546bdfd97-vmgm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d45b178ab1", MAC:"92:39:20:87:b2:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:23:06.775593 containerd[1581]: 2025-12-12 17:23:06.764 [INFO][4642] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89" Namespace="calico-apiserver" Pod="calico-apiserver-8546bdfd97-vmgm5" WorkloadEndpoint="localhost-k8s-calico--apiserver--8546bdfd97--vmgm5-eth0" Dec 12 17:23:06.778000 audit: BPF prog-id=239 op=LOAD Dec 12 17:23:06.778000 audit: BPF prog-id=240 op=LOAD Dec 12 17:23:06.778000 audit[4812]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4798 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661316237383339646630383035633363383933666238393932333438 Dec 12 17:23:06.778000 audit: BPF prog-id=240 op=UNLOAD Dec 12 17:23:06.778000 audit[4812]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4798 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.778000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661316237383339646630383035633363383933666238393932333438 Dec 12 17:23:06.779000 audit: BPF prog-id=241 op=LOAD Dec 12 17:23:06.779000 audit[4812]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4798 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661316237383339646630383035633363383933666238393932333438 Dec 12 17:23:06.779000 audit: BPF prog-id=242 op=LOAD Dec 12 17:23:06.779000 audit[4812]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4798 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661316237383339646630383035633363383933666238393932333438 Dec 12 17:23:06.779000 audit: BPF prog-id=242 op=UNLOAD Dec 12 17:23:06.779000 audit[4812]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4798 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661316237383339646630383035633363383933666238393932333438 Dec 12 17:23:06.779000 audit: BPF prog-id=241 op=UNLOAD Dec 12 17:23:06.779000 audit[4812]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4798 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661316237383339646630383035633363383933666238393932333438 Dec 12 17:23:06.779000 audit: BPF prog-id=243 op=LOAD Dec 12 17:23:06.779000 audit[4812]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4798 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.779000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661316237383339646630383035633363383933666238393932333438 Dec 12 17:23:06.783184 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:23:06.789050 containerd[1581]: time="2025-12-12T17:23:06.788943094Z" level=info msg="StartContainer for \"3b9ffc1cf8f8c9cfde40f7050a95579c1efee4d3c3b4f3188ae2a698e895a5c1\" returns successfully" Dec 12 17:23:06.790000 audit[4855]: NETFILTER_CFG table=filter:136 family=2 entries=53 op=nft_register_chain pid=4855 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:23:06.790000 audit[4855]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26624 a0=3 a1=ffffc8f528b0 a2=0 a3=ffff9fc98fa8 items=0 ppid=4365 pid=4855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.790000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:23:06.802273 containerd[1581]: time="2025-12-12T17:23:06.802224993Z" level=info msg="connecting to shim 8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89" address="unix:///run/containerd/s/abf3a875a9d29156627d3daa2fbf4df49ef85f61ee8c76f6bd57296c4246e093" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:23:06.832279 systemd[1]: Started cri-containerd-8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89.scope - libcontainer container 8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89. 
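The PROCTITLE fields in the audit records above are hex-encoded command lines, with NUL bytes separating the arguments (here, the runc and iptables-nft-restore invocations). A minimal Python sketch for turning them back into readable text; the hex literal is just the opening portion of the iptables-nft-restore proctitle value recorded above:

def decode_proctitle(hex_value: str) -> str:
    # PROCTITLE carries the process's argv as raw bytes; arguments are NUL-separated.
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode("utf-8", errors="replace")

# Opening portion of the iptables-nft-restore proctitle logged above.
print(decode_proctitle("69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"))
# -> iptables-nft-restore --noflush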
Dec 12 17:23:06.847000 audit: BPF prog-id=244 op=LOAD Dec 12 17:23:06.847000 audit: BPF prog-id=245 op=LOAD Dec 12 17:23:06.847000 audit[4877]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4864 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863663631366261366661633537373732333530373832376337613062 Dec 12 17:23:06.848000 audit: BPF prog-id=245 op=UNLOAD Dec 12 17:23:06.848000 audit[4877]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4864 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863663631366261366661633537373732333530373832376337613062 Dec 12 17:23:06.848000 audit: BPF prog-id=246 op=LOAD Dec 12 17:23:06.848000 audit[4877]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4864 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863663631366261366661633537373732333530373832376337613062 Dec 12 17:23:06.848000 audit: BPF prog-id=247 op=LOAD Dec 12 17:23:06.848000 audit[4877]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4864 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863663631366261366661633537373732333530373832376337613062 Dec 12 17:23:06.848000 audit: BPF prog-id=247 op=UNLOAD Dec 12 17:23:06.848000 audit[4877]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4864 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863663631366261366661633537373732333530373832376337613062 Dec 12 17:23:06.848000 audit: BPF prog-id=246 op=UNLOAD Dec 12 17:23:06.848000 audit[4877]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4864 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863663631366261366661633537373732333530373832376337613062 Dec 12 17:23:06.848000 audit: BPF prog-id=248 op=LOAD Dec 12 17:23:06.848000 audit[4877]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4864 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:06.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863663631366261366661633537373732333530373832376337613062 Dec 12 17:23:06.850297 containerd[1581]: time="2025-12-12T17:23:06.849957969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8546bdfd97-9gwkk,Uid:84025a67-a460-4acb-8e7e-d73c2b743a45,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6a1b7839df0805c3c893fb8992348559ad288e6d66a00ce9b9c8c72a1648df34\"" Dec 12 17:23:06.850384 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:23:06.853091 containerd[1581]: time="2025-12-12T17:23:06.852861487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:23:06.880900 containerd[1581]: time="2025-12-12T17:23:06.880850143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8546bdfd97-vmgm5,Uid:814ad701-33db-45b4-b87d-3357a1210c6d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8cf616ba6fac577723507827c7a0bf76cd0582387515c7f3136ff343b9768f89\"" Dec 12 17:23:07.063996 containerd[1581]: time="2025-12-12T17:23:07.063906310Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:07.064919 containerd[1581]: time="2025-12-12T17:23:07.064842107Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:23:07.064992 containerd[1581]: time="2025-12-12T17:23:07.064902509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:07.065203 kubelet[2721]: E1212 17:23:07.065167 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:23:07.065296 kubelet[2721]: E1212 17:23:07.065282 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:23:07.065641 kubelet[2721]: E1212 17:23:07.065572 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2fth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8546bdfd97-9gwkk_calico-apiserver(84025a67-a460-4acb-8e7e-d73c2b743a45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:07.065978 containerd[1581]: time="2025-12-12T17:23:07.065740862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:23:07.066874 kubelet[2721]: E1212 17:23:07.066844 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8546bdfd97-9gwkk" podUID="84025a67-a460-4acb-8e7e-d73c2b743a45" Dec 12 17:23:07.259355 containerd[1581]: time="2025-12-12T17:23:07.259309771Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:07.260332 containerd[1581]: time="2025-12-12T17:23:07.260257649Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:23:07.260332 containerd[1581]: time="2025-12-12T17:23:07.260297090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:07.260522 kubelet[2721]: E1212 17:23:07.260475 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:23:07.260592 kubelet[2721]: E1212 17:23:07.260526 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:23:07.260748 kubelet[2721]: E1212 17:23:07.260651 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vtf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8546bdfd97-vmgm5_calico-apiserver(814ad701-33db-45b4-b87d-3357a1210c6d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" 
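Both calico-apiserver pods fail identically: containerd asks ghcr.io for flatcar/calico/apiserver:v3.30.4, receives 404 Not Found, and kubelet reports ErrImagePull. A standalone way to check whether that tag exists, sketched with Python's standard library against the OCI distribution API; the anonymous /token endpoint used here is an assumption about how ghcr.io issues pull tokens, so treat the sketch as illustrative only:

import json
import urllib.error
import urllib.request

IMAGE = "flatcar/calico/apiserver"
TAG = "v3.30.4"

# Assumption: ghcr.io issues anonymous pull tokens from /token, as
# Docker-Registry-v2-compatible registries commonly do.
token_url = f"https://ghcr.io/token?scope=repository:{IMAGE}:pull"
with urllib.request.urlopen(token_url) as resp:
    token = json.load(resp)["token"]

req = urllib.request.Request(
    f"https://ghcr.io/v2/{IMAGE}/manifests/{TAG}",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.oci.image.index.v1+json",
    },
)
try:
    with urllib.request.urlopen(req) as resp:
        print("manifest found, HTTP", resp.status)
except urllib.error.HTTPError as err:
    # A 404 here lines up with containerd's "fetch failed after status: 404 Not Found".
    print("manifest lookup failed, HTTP", err.code)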
Dec 12 17:23:07.261829 kubelet[2721]: E1212 17:23:07.261792 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8546bdfd97-vmgm5" podUID="814ad701-33db-45b4-b87d-3357a1210c6d" Dec 12 17:23:07.391947 containerd[1581]: time="2025-12-12T17:23:07.391819061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-47rgc,Uid:2e84fb14-216b-4920-bea5-4db1089bfe0c,Namespace:calico-system,Attempt:0,}" Dec 12 17:23:07.491787 systemd-networkd[1502]: calia10fcdcd61e: Link UP Dec 12 17:23:07.492562 systemd-networkd[1502]: calia10fcdcd61e: Gained carrier Dec 12 17:23:07.508325 containerd[1581]: 2025-12-12 17:23:07.428 [INFO][4914] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--47rgc-eth0 goldmane-666569f655- calico-system 2e84fb14-216b-4920-bea5-4db1089bfe0c 819 0 2025-12-12 17:22:40 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-47rgc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia10fcdcd61e [] [] }} ContainerID="9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e" Namespace="calico-system" Pod="goldmane-666569f655-47rgc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--47rgc-" Dec 12 17:23:07.508325 containerd[1581]: 2025-12-12 17:23:07.428 [INFO][4914] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e" Namespace="calico-system" Pod="goldmane-666569f655-47rgc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--47rgc-eth0" Dec 12 17:23:07.508325 containerd[1581]: 2025-12-12 17:23:07.452 [INFO][4930] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e" HandleID="k8s-pod-network.9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e" Workload="localhost-k8s-goldmane--666569f655--47rgc-eth0" Dec 12 17:23:07.508325 containerd[1581]: 2025-12-12 17:23:07.453 [INFO][4930] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e" HandleID="k8s-pod-network.9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e" Workload="localhost-k8s-goldmane--666569f655--47rgc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011a130), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-47rgc", "timestamp":"2025-12-12 17:23:07.4528782 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:23:07.508325 containerd[1581]: 2025-12-12 17:23:07.453 [INFO][4930] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 12 17:23:07.508325 containerd[1581]: 2025-12-12 17:23:07.453 [INFO][4930] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:23:07.508325 containerd[1581]: 2025-12-12 17:23:07.453 [INFO][4930] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:23:07.508325 containerd[1581]: 2025-12-12 17:23:07.462 [INFO][4930] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e" host="localhost" Dec 12 17:23:07.508325 containerd[1581]: 2025-12-12 17:23:07.466 [INFO][4930] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:23:07.508325 containerd[1581]: 2025-12-12 17:23:07.471 [INFO][4930] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:23:07.508325 containerd[1581]: 2025-12-12 17:23:07.473 [INFO][4930] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:23:07.508325 containerd[1581]: 2025-12-12 17:23:07.475 [INFO][4930] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:23:07.508325 containerd[1581]: 2025-12-12 17:23:07.475 [INFO][4930] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e" host="localhost" Dec 12 17:23:07.508325 containerd[1581]: 2025-12-12 17:23:07.476 [INFO][4930] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e Dec 12 17:23:07.508325 containerd[1581]: 2025-12-12 17:23:07.481 [INFO][4930] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e" host="localhost" Dec 12 17:23:07.508325 containerd[1581]: 2025-12-12 17:23:07.487 [INFO][4930] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e" host="localhost" Dec 12 17:23:07.508325 containerd[1581]: 2025-12-12 17:23:07.487 [INFO][4930] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e" host="localhost" Dec 12 17:23:07.508325 containerd[1581]: 2025-12-12 17:23:07.487 [INFO][4930] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
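The IPAM trace above shows the plugin reusing the node's affinity for the 192.168.88.128/26 block and assigning 192.168.88.136 to the goldmane pod from it. The block arithmetic implied by the log, checked with nothing more than the Python standard library:

import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
pod_ip = ipaddress.ip_address("192.168.88.136")

# A /26 affinity block spans 64 addresses: 192.168.88.128 through 192.168.88.191.
print(block.num_addresses)                      # 64
print(pod_ip in block)                          # True
print(block.network_address, block.broadcast_address)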
Dec 12 17:23:07.508325 containerd[1581]: 2025-12-12 17:23:07.487 [INFO][4930] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e" HandleID="k8s-pod-network.9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e" Workload="localhost-k8s-goldmane--666569f655--47rgc-eth0" Dec 12 17:23:07.508973 containerd[1581]: 2025-12-12 17:23:07.489 [INFO][4914] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e" Namespace="calico-system" Pod="goldmane-666569f655-47rgc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--47rgc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--47rgc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2e84fb14-216b-4920-bea5-4db1089bfe0c", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-47rgc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia10fcdcd61e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:23:07.508973 containerd[1581]: 2025-12-12 17:23:07.490 [INFO][4914] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e" Namespace="calico-system" Pod="goldmane-666569f655-47rgc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--47rgc-eth0" Dec 12 17:23:07.508973 containerd[1581]: 2025-12-12 17:23:07.490 [INFO][4914] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia10fcdcd61e ContainerID="9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e" Namespace="calico-system" Pod="goldmane-666569f655-47rgc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--47rgc-eth0" Dec 12 17:23:07.508973 containerd[1581]: 2025-12-12 17:23:07.492 [INFO][4914] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e" Namespace="calico-system" Pod="goldmane-666569f655-47rgc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--47rgc-eth0" Dec 12 17:23:07.508973 containerd[1581]: 2025-12-12 17:23:07.492 [INFO][4914] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e" Namespace="calico-system" Pod="goldmane-666569f655-47rgc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--47rgc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--47rgc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2e84fb14-216b-4920-bea5-4db1089bfe0c", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 22, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e", Pod:"goldmane-666569f655-47rgc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia10fcdcd61e", MAC:"62:60:a2:ab:d9:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:23:07.508973 containerd[1581]: 2025-12-12 17:23:07.506 [INFO][4914] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e" Namespace="calico-system" Pod="goldmane-666569f655-47rgc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--47rgc-eth0" Dec 12 17:23:07.523000 audit[4949]: NETFILTER_CFG table=filter:137 family=2 entries=70 op=nft_register_chain pid=4949 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:23:07.523000 audit[4949]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=33956 a0=3 a1=ffffd4c832f0 a2=0 a3=ffff7f793fa8 items=0 ppid=4365 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:07.523000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:23:07.527854 containerd[1581]: time="2025-12-12T17:23:07.527815608Z" level=info msg="connecting to shim 9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e" address="unix:///run/containerd/s/41ab248747d50c4071d53efde4534c298d9e134a9942b99bd11f7de9f51de9ee" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:23:07.559300 systemd[1]: Started cri-containerd-9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e.scope - libcontainer container 9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e. 
Dec 12 17:23:07.569000 audit: BPF prog-id=249 op=LOAD Dec 12 17:23:07.570000 audit: BPF prog-id=250 op=LOAD Dec 12 17:23:07.570000 audit[4969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4957 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:07.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964323265346539666364386462333130353431643134323632356438 Dec 12 17:23:07.570000 audit: BPF prog-id=250 op=UNLOAD Dec 12 17:23:07.570000 audit[4969]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4957 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:07.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964323265346539666364386462333130353431643134323632356438 Dec 12 17:23:07.570000 audit: BPF prog-id=251 op=LOAD Dec 12 17:23:07.570000 audit[4969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4957 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:07.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964323265346539666364386462333130353431643134323632356438 Dec 12 17:23:07.570000 audit: BPF prog-id=252 op=LOAD Dec 12 17:23:07.570000 audit[4969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4957 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:07.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964323265346539666364386462333130353431643134323632356438 Dec 12 17:23:07.570000 audit: BPF prog-id=252 op=UNLOAD Dec 12 17:23:07.570000 audit[4969]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4957 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:07.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964323265346539666364386462333130353431643134323632356438 Dec 12 17:23:07.570000 audit: BPF prog-id=251 op=UNLOAD Dec 12 17:23:07.570000 audit[4969]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4957 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:07.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964323265346539666364386462333130353431643134323632356438 Dec 12 17:23:07.570000 audit: BPF prog-id=253 op=LOAD Dec 12 17:23:07.570000 audit[4969]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4957 pid=4969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:07.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964323265346539666364386462333130353431643134323632356438 Dec 12 17:23:07.572451 systemd-resolved[1284]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:23:07.589724 kubelet[2721]: E1212 17:23:07.589673 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8546bdfd97-vmgm5" podUID="814ad701-33db-45b4-b87d-3357a1210c6d" Dec 12 17:23:07.593258 kubelet[2721]: E1212 17:23:07.592950 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8546bdfd97-9gwkk" podUID="84025a67-a460-4acb-8e7e-d73c2b743a45" Dec 12 17:23:07.598127 kubelet[2721]: E1212 17:23:07.597519 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:23:07.599261 kubelet[2721]: E1212 17:23:07.598719 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:23:07.620000 audit[4996]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=4996 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:07.620000 audit[4996]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe560d4c0 a2=0 a3=1 items=0 ppid=2830 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:07.620000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:07.626269 containerd[1581]: time="2025-12-12T17:23:07.626118783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-47rgc,Uid:2e84fb14-216b-4920-bea5-4db1089bfe0c,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d22e4e9fcd8db310541d142625d876e740a38a54ffd318ba90adf64449c454e\"" Dec 12 17:23:07.627000 audit[4996]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=4996 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:07.627000 audit[4996]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe560d4c0 a2=0 a3=1 items=0 ppid=2830 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:07.627000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:07.640368 containerd[1581]: time="2025-12-12T17:23:07.640257143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:23:07.656000 audit[4998]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=4998 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:07.659525 kernel: kauditd_printk_skb: 436 callbacks suppressed Dec 12 17:23:07.659643 kernel: audit: type=1325 audit(1765560187.656:743): table=filter:140 family=2 entries=14 op=nft_register_rule pid=4998 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:07.659673 kernel: audit: type=1300 audit(1765560187.656:743): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdae715b0 a2=0 a3=1 items=0 ppid=2830 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:07.656000 audit[4998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdae715b0 a2=0 a3=1 items=0 ppid=2830 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:07.656000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:07.664843 kernel: audit: type=1327 audit(1765560187.656:743): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:07.675000 audit[4998]: NETFILTER_CFG table=nat:141 family=2 entries=56 op=nft_register_chain pid=4998 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:07.675000 audit[4998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffdae715b0 a2=0 a3=1 items=0 ppid=2830 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:07.681166 kernel: audit: type=1325 audit(1765560187.675:744): table=nat:141 family=2 entries=56 op=nft_register_chain pid=4998 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:07.681256 kernel: audit: type=1300 audit(1765560187.675:744): arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffdae715b0 a2=0 a3=1 items=0 ppid=2830 pid=4998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:07.681278 kernel: audit: type=1327 audit(1765560187.675:744): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:07.675000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:07.696863 kubelet[2721]: I1212 17:23:07.696749 2721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-m45nr" podStartSLOduration=40.69672734 podStartE2EDuration="40.69672734s" podCreationTimestamp="2025-12-12 17:22:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:23:07.638595917 +0000 UTC m=+46.349400215" watchObservedRunningTime="2025-12-12 17:23:07.69672734 +0000 UTC m=+46.407531558" Dec 12 17:23:07.848007 containerd[1581]: time="2025-12-12T17:23:07.847946931Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:07.861138 containerd[1581]: time="2025-12-12T17:23:07.861079171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:23:07.861324 containerd[1581]: time="2025-12-12T17:23:07.861141414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:07.861354 kubelet[2721]: E1212 17:23:07.861320 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:23:07.861457 kubelet[2721]: E1212 17:23:07.861368 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:23:07.861457 kubelet[2721]: E1212 17:23:07.861521 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gp8zz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-47rgc_calico-system(2e84fb14-216b-4920-bea5-4db1089bfe0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:07.863640 kubelet[2721]: E1212 17:23:07.863494 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-47rgc" podUID="2e84fb14-216b-4920-bea5-4db1089bfe0c" Dec 12 17:23:07.870307 systemd-networkd[1502]: cali191cbcdd9cf: Gained IPv6LL Dec 12 17:23:08.062216 systemd-networkd[1502]: cali5d45b178ab1: Gained 
IPv6LL Dec 12 17:23:08.383198 systemd-networkd[1502]: calied201f9cda6: Gained IPv6LL Dec 12 17:23:08.510228 systemd-networkd[1502]: calia10fcdcd61e: Gained IPv6LL Dec 12 17:23:08.601377 kubelet[2721]: E1212 17:23:08.601315 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:23:08.602867 kubelet[2721]: E1212 17:23:08.602323 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-47rgc" podUID="2e84fb14-216b-4920-bea5-4db1089bfe0c" Dec 12 17:23:08.602867 kubelet[2721]: E1212 17:23:08.602382 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8546bdfd97-9gwkk" podUID="84025a67-a460-4acb-8e7e-d73c2b743a45" Dec 12 17:23:08.602867 kubelet[2721]: E1212 17:23:08.602571 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8546bdfd97-vmgm5" podUID="814ad701-33db-45b4-b87d-3357a1210c6d" Dec 12 17:23:08.646000 audit[5001]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=5001 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:08.646000 audit[5001]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff4b8a3c0 a2=0 a3=1 items=0 ppid=2830 pid=5001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:08.654232 kernel: audit: type=1325 audit(1765560188.646:745): table=filter:142 family=2 entries=14 op=nft_register_rule pid=5001 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:08.654350 kernel: audit: type=1300 audit(1765560188.646:745): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff4b8a3c0 a2=0 a3=1 items=0 ppid=2830 pid=5001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:08.646000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:08.655972 kernel: audit: type=1327 audit(1765560188.646:745): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:08.656021 kernel: audit: type=1325 audit(1765560188.654:746): table=nat:143 family=2 entries=20 op=nft_register_rule pid=5001 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:08.654000 audit[5001]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5001 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:08.654000 audit[5001]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff4b8a3c0 a2=0 a3=1 items=0 ppid=2830 pid=5001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:08.654000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:09.605579 kubelet[2721]: E1212 17:23:09.605526 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-47rgc" podUID="2e84fb14-216b-4920-bea5-4db1089bfe0c" Dec 12 17:23:09.606001 kubelet[2721]: E1212 17:23:09.605949 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:23:10.102373 kubelet[2721]: I1212 17:23:10.102313 2721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:23:10.102818 kubelet[2721]: E1212 17:23:10.102776 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:23:10.606676 kubelet[2721]: E1212 17:23:10.606627 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:23:10.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.37:22-10.0.0.1:38420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:10.809712 systemd[1]: Started sshd@9-10.0.0.37:22-10.0.0.1:38420.service - OpenSSH per-connection server daemon (10.0.0.1:38420). 
Dec 12 17:23:10.874000 audit[5062]: USER_ACCT pid=5062 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:10.876306 sshd[5062]: Accepted publickey for core from 10.0.0.1 port 38420 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:23:10.876000 audit[5062]: CRED_ACQ pid=5062 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:10.876000 audit[5062]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdd602500 a2=3 a3=0 items=0 ppid=1 pid=5062 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:10.876000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:10.878406 sshd-session[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:23:10.883457 systemd-logind[1562]: New session 10 of user core. Dec 12 17:23:10.890292 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 17:23:10.892000 audit[5062]: USER_START pid=5062 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:10.894000 audit[5065]: CRED_ACQ pid=5065 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:11.044719 sshd[5065]: Connection closed by 10.0.0.1 port 38420 Dec 12 17:23:11.045126 sshd-session[5062]: pam_unix(sshd:session): session closed for user core Dec 12 17:23:11.046000 audit[5062]: USER_END pid=5062 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:11.046000 audit[5062]: CRED_DISP pid=5062 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:11.055926 systemd[1]: sshd@9-10.0.0.37:22-10.0.0.1:38420.service: Deactivated successfully. Dec 12 17:23:11.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.37:22-10.0.0.1:38420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:11.058604 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 17:23:11.060993 systemd-logind[1562]: Session 10 logged out. Waiting for processes to exit. Dec 12 17:23:11.062693 systemd-logind[1562]: Removed session 10. 
Dec 12 17:23:11.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.37:22-10.0.0.1:47936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:11.064637 systemd[1]: Started sshd@10-10.0.0.37:22-10.0.0.1:47936.service - OpenSSH per-connection server daemon (10.0.0.1:47936). Dec 12 17:23:11.132000 audit[5079]: USER_ACCT pid=5079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:11.133741 sshd[5079]: Accepted publickey for core from 10.0.0.1 port 47936 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:23:11.133000 audit[5079]: CRED_ACQ pid=5079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:11.134000 audit[5079]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd285e320 a2=3 a3=0 items=0 ppid=1 pid=5079 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:11.134000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:11.135443 sshd-session[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:23:11.140012 systemd-logind[1562]: New session 11 of user core. Dec 12 17:23:11.147274 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 17:23:11.148000 audit[5079]: USER_START pid=5079 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:11.150000 audit[5082]: CRED_ACQ pid=5082 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:11.284195 sshd[5082]: Connection closed by 10.0.0.1 port 47936 Dec 12 17:23:11.285453 sshd-session[5079]: pam_unix(sshd:session): session closed for user core Dec 12 17:23:11.288000 audit[5079]: USER_END pid=5079 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:11.288000 audit[5079]: CRED_DISP pid=5079 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:11.300376 systemd[1]: sshd@10-10.0.0.37:22-10.0.0.1:47936.service: Deactivated successfully. 
Dec 12 17:23:11.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.37:22-10.0.0.1:47936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:11.305549 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 17:23:11.309892 systemd-logind[1562]: Session 11 logged out. Waiting for processes to exit. Dec 12 17:23:11.314810 systemd-logind[1562]: Removed session 11. Dec 12 17:23:11.316193 systemd[1]: Started sshd@11-10.0.0.37:22-10.0.0.1:47948.service - OpenSSH per-connection server daemon (10.0.0.1:47948). Dec 12 17:23:11.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.37:22-10.0.0.1:47948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:11.378000 audit[5095]: USER_ACCT pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:11.379352 sshd[5095]: Accepted publickey for core from 10.0.0.1 port 47948 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:23:11.379000 audit[5095]: CRED_ACQ pid=5095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:11.379000 audit[5095]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffadbb690 a2=3 a3=0 items=0 ppid=1 pid=5095 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:11.379000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:11.381091 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:23:11.386134 systemd-logind[1562]: New session 12 of user core. Dec 12 17:23:11.396300 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 12 17:23:11.397000 audit[5095]: USER_START pid=5095 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:11.399000 audit[5098]: CRED_ACQ pid=5098 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:11.523531 sshd[5098]: Connection closed by 10.0.0.1 port 47948 Dec 12 17:23:11.523946 sshd-session[5095]: pam_unix(sshd:session): session closed for user core Dec 12 17:23:11.524000 audit[5095]: USER_END pid=5095 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:11.524000 audit[5095]: CRED_DISP pid=5095 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:11.528556 systemd[1]: sshd@11-10.0.0.37:22-10.0.0.1:47948.service: Deactivated successfully. Dec 12 17:23:11.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.37:22-10.0.0.1:47948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:11.530817 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 17:23:11.531724 systemd-logind[1562]: Session 12 logged out. Waiting for processes to exit. Dec 12 17:23:11.532814 systemd-logind[1562]: Removed session 12. 
Dec 12 17:23:13.390848 containerd[1581]: time="2025-12-12T17:23:13.390779751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:23:13.610923 containerd[1581]: time="2025-12-12T17:23:13.610854857Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:13.612735 containerd[1581]: time="2025-12-12T17:23:13.612670520Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:23:13.612872 containerd[1581]: time="2025-12-12T17:23:13.612755803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:13.612902 kubelet[2721]: E1212 17:23:13.612845 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:23:13.612902 kubelet[2721]: E1212 17:23:13.612884 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:23:13.613273 kubelet[2721]: E1212 17:23:13.613010 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:16fbc09b741f4bad9a7e46252a777791,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vfs7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78c6bf566f-r6z8g_calico-system(b80b51cf-55e4-4f30-8272-528fb61d0936): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:13.617761 containerd[1581]: time="2025-12-12T17:23:13.616888108Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:23:13.812249 containerd[1581]: time="2025-12-12T17:23:13.812206427Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:13.813180 containerd[1581]: time="2025-12-12T17:23:13.813142699Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:23:13.813263 containerd[1581]: time="2025-12-12T17:23:13.813208422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:13.813420 kubelet[2721]: E1212 17:23:13.813364 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:23:13.813480 kubelet[2721]: E1212 17:23:13.813422 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:23:13.813626 kubelet[2721]: E1212 17:23:13.813589 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vfs7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78c6bf566f-r6z8g_calico-system(b80b51cf-55e4-4f30-8272-528fb61d0936): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:13.814799 kubelet[2721]: E1212 17:23:13.814750 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78c6bf566f-r6z8g" podUID="b80b51cf-55e4-4f30-8272-528fb61d0936" Dec 12 17:23:16.539321 systemd[1]: Started sshd@12-10.0.0.37:22-10.0.0.1:47952.service - OpenSSH per-connection server daemon (10.0.0.1:47952). Dec 12 17:23:16.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.37:22-10.0.0.1:47952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:16.540444 kernel: kauditd_printk_skb: 35 callbacks suppressed Dec 12 17:23:16.540500 kernel: audit: type=1130 audit(1765560196.538:774): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.37:22-10.0.0.1:47952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:16.604000 audit[5125]: USER_ACCT pid=5125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:16.606200 sshd[5125]: Accepted publickey for core from 10.0.0.1 port 47952 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:23:16.609368 sshd-session[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:23:16.607000 audit[5125]: CRED_ACQ pid=5125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:16.613068 kernel: audit: type=1101 audit(1765560196.604:775): pid=5125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:16.613139 kernel: audit: type=1103 audit(1765560196.607:776): pid=5125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:16.615720 kernel: audit: type=1006 audit(1765560196.607:777): pid=5125 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 12 17:23:16.615790 kernel: audit: type=1300 audit(1765560196.607:777): 
arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeac590b0 a2=3 a3=0 items=0 ppid=1 pid=5125 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:16.607000 audit[5125]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeac590b0 a2=3 a3=0 items=0 ppid=1 pid=5125 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:16.619138 kernel: audit: type=1327 audit(1765560196.607:777): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:16.607000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:16.618080 systemd-logind[1562]: New session 13 of user core. Dec 12 17:23:16.628325 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 17:23:16.630000 audit[5125]: USER_START pid=5125 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:16.636058 kernel: audit: type=1105 audit(1765560196.630:778): pid=5125 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:16.637000 audit[5128]: CRED_ACQ pid=5128 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:16.642081 kernel: audit: type=1103 audit(1765560196.637:779): pid=5128 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:16.723432 sshd[5128]: Connection closed by 10.0.0.1 port 47952 Dec 12 17:23:16.723812 sshd-session[5125]: pam_unix(sshd:session): session closed for user core Dec 12 17:23:16.724000 audit[5125]: USER_END pid=5125 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:16.728374 systemd[1]: sshd@12-10.0.0.37:22-10.0.0.1:47952.service: Deactivated successfully. 
Dec 12 17:23:16.724000 audit[5125]: CRED_DISP pid=5125 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:16.731211 kernel: audit: type=1106 audit(1765560196.724:780): pid=5125 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:16.731278 kernel: audit: type=1104 audit(1765560196.724:781): pid=5125 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:16.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.37:22-10.0.0.1:47952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:16.733150 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 17:23:16.735867 systemd-logind[1562]: Session 13 logged out. Waiting for processes to exit. Dec 12 17:23:16.737644 systemd-logind[1562]: Removed session 13. Dec 12 17:23:17.390574 containerd[1581]: time="2025-12-12T17:23:17.390512309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:23:17.589657 containerd[1581]: time="2025-12-12T17:23:17.589592838Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:17.590966 containerd[1581]: time="2025-12-12T17:23:17.590870040Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:23:17.590966 containerd[1581]: time="2025-12-12T17:23:17.590907761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:17.591565 kubelet[2721]: E1212 17:23:17.591416 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:23:17.591565 kubelet[2721]: E1212 17:23:17.591560 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:23:17.591887 kubelet[2721]: E1212 17:23:17.591777 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhrc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-9d446d548-w7qwm_calico-system(2fb0c1c5-10e8-4dec-a57c-dc20c81a6882): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:17.593091 kubelet[2721]: E1212 17:23:17.593014 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9d446d548-w7qwm" podUID="2fb0c1c5-10e8-4dec-a57c-dc20c81a6882" Dec 12 17:23:20.393216 containerd[1581]: time="2025-12-12T17:23:20.393123646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:23:20.581012 containerd[1581]: time="2025-12-12T17:23:20.580935117Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:20.582372 containerd[1581]: time="2025-12-12T17:23:20.582240599Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:23:20.582372 containerd[1581]: time="2025-12-12T17:23:20.582316601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:20.582668 kubelet[2721]: E1212 17:23:20.582562 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:23:20.582975 kubelet[2721]: E1212 17:23:20.582675 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:23:20.582975 kubelet[2721]: E1212 17:23:20.582896 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8stxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-c7zbf_calico-system(1e0e30e4-6fdf-475c-9a4a-59287d927d5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:20.584451 
containerd[1581]: time="2025-12-12T17:23:20.583709485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:23:20.790559 containerd[1581]: time="2025-12-12T17:23:20.790233465Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:20.792828 containerd[1581]: time="2025-12-12T17:23:20.791803794Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:23:20.792828 containerd[1581]: time="2025-12-12T17:23:20.791886037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:20.792828 containerd[1581]: time="2025-12-12T17:23:20.792500536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:23:20.793001 kubelet[2721]: E1212 17:23:20.792075 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:23:20.793001 kubelet[2721]: E1212 17:23:20.792130 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:23:20.793001 kubelet[2721]: E1212 17:23:20.792403 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vtf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8546bdfd97-vmgm5_calico-apiserver(814ad701-33db-45b4-b87d-3357a1210c6d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:20.793937 kubelet[2721]: E1212 17:23:20.793888 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8546bdfd97-vmgm5" podUID="814ad701-33db-45b4-b87d-3357a1210c6d" Dec 12 17:23:20.995426 containerd[1581]: time="2025-12-12T17:23:20.995353600Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:20.996766 containerd[1581]: time="2025-12-12T17:23:20.996628441Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:23:20.996766 containerd[1581]: time="2025-12-12T17:23:20.996710083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:20.997155 kubelet[2721]: E1212 17:23:20.996954 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:23:20.997155 kubelet[2721]: E1212 17:23:20.997012 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:23:20.998775 kubelet[2721]: E1212 17:23:20.997206 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8stxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-c7zbf_calico-system(1e0e30e4-6fdf-475c-9a4a-59287d927d5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:20.999201 kubelet[2721]: E1212 17:23:20.999093 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c7zbf" podUID="1e0e30e4-6fdf-475c-9a4a-59287d927d5d" Dec 12 17:23:21.390709 containerd[1581]: time="2025-12-12T17:23:21.390624050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:23:21.602473 containerd[1581]: time="2025-12-12T17:23:21.602238548Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:21.604433 containerd[1581]: time="2025-12-12T17:23:21.603590750Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:23:21.604433 containerd[1581]: time="2025-12-12T17:23:21.603635472Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:21.604589 kubelet[2721]: E1212 17:23:21.603802 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:23:21.604589 kubelet[2721]: E1212 17:23:21.603849 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:23:21.604589 kubelet[2721]: E1212 17:23:21.603974 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gp8zz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod goldmane-666569f655-47rgc_calico-system(2e84fb14-216b-4920-bea5-4db1089bfe0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:21.605764 kubelet[2721]: E1212 17:23:21.605506 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-47rgc" podUID="2e84fb14-216b-4920-bea5-4db1089bfe0c" Dec 12 17:23:21.753061 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:23:21.753143 kernel: audit: type=1130 audit(1765560201.750:783): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.37:22-10.0.0.1:52578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:21.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.37:22-10.0.0.1:52578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:21.751420 systemd[1]: Started sshd@13-10.0.0.37:22-10.0.0.1:52578.service - OpenSSH per-connection server daemon (10.0.0.1:52578). Dec 12 17:23:21.840000 audit[5147]: USER_ACCT pid=5147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:21.841299 sshd[5147]: Accepted publickey for core from 10.0.0.1 port 52578 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:23:21.843000 audit[5147]: CRED_ACQ pid=5147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:21.845216 sshd-session[5147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:23:21.847664 kernel: audit: type=1101 audit(1765560201.840:784): pid=5147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:21.847730 kernel: audit: type=1103 audit(1765560201.843:785): pid=5147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:21.847747 kernel: audit: type=1006 audit(1765560201.843:786): pid=5147 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 12 17:23:21.843000 audit[5147]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc48582d0 a2=3 a3=0 items=0 ppid=1 pid=5147 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:21.854594 kernel: audit: type=1300 audit(1765560201.843:786): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc48582d0 a2=3 a3=0 items=0 ppid=1 pid=5147 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:21.854667 kernel: audit: type=1327 audit(1765560201.843:786): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:21.843000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:21.854557 systemd-logind[1562]: New session 14 of user core. Dec 12 17:23:21.861275 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 17:23:21.865000 audit[5147]: USER_START pid=5147 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:21.866000 audit[5150]: CRED_ACQ pid=5150 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:21.872942 kernel: audit: type=1105 audit(1765560201.865:787): pid=5147 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:21.873079 kernel: audit: type=1103 audit(1765560201.866:788): pid=5150 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:22.003854 sshd[5150]: Connection closed by 10.0.0.1 port 52578 Dec 12 17:23:22.004720 sshd-session[5147]: pam_unix(sshd:session): session closed for user core Dec 12 17:23:22.004000 audit[5147]: USER_END pid=5147 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:22.008804 systemd-logind[1562]: Session 14 logged out. Waiting for processes to exit. Dec 12 17:23:22.009085 systemd[1]: sshd@13-10.0.0.37:22-10.0.0.1:52578.service: Deactivated successfully. Dec 12 17:23:22.004000 audit[5147]: CRED_DISP pid=5147 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:22.010826 systemd[1]: session-14.scope: Deactivated successfully. 
Dec 12 17:23:22.012460 kernel: audit: type=1106 audit(1765560202.004:789): pid=5147 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:22.012533 kernel: audit: type=1104 audit(1765560202.004:790): pid=5147 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:22.012949 systemd-logind[1562]: Removed session 14. Dec 12 17:23:22.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.37:22-10.0.0.1:52578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:23.391027 containerd[1581]: time="2025-12-12T17:23:23.390927435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:23:23.585515 containerd[1581]: time="2025-12-12T17:23:23.585464505Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:23.586582 containerd[1581]: time="2025-12-12T17:23:23.586507897Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:23:23.586660 containerd[1581]: time="2025-12-12T17:23:23.586565539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:23.586836 kubelet[2721]: E1212 17:23:23.586791 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:23:23.587253 kubelet[2721]: E1212 17:23:23.586849 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:23:23.587253 kubelet[2721]: E1212 17:23:23.586976 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2fth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8546bdfd97-9gwkk_calico-apiserver(84025a67-a460-4acb-8e7e-d73c2b743a45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:23.588492 kubelet[2721]: E1212 17:23:23.588441 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8546bdfd97-9gwkk" podUID="84025a67-a460-4acb-8e7e-d73c2b743a45" Dec 12 17:23:26.392335 kubelet[2721]: E1212 17:23:26.392256 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-78c6bf566f-r6z8g" podUID="b80b51cf-55e4-4f30-8272-528fb61d0936" Dec 12 17:23:27.015970 systemd[1]: Started sshd@14-10.0.0.37:22-10.0.0.1:52582.service - OpenSSH per-connection server daemon (10.0.0.1:52582). Dec 12 17:23:27.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.37:22-10.0.0.1:52582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:27.019097 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:23:27.019187 kernel: audit: type=1130 audit(1765560207.015:792): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.37:22-10.0.0.1:52582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:27.089000 audit[5169]: USER_ACCT pid=5169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:27.090349 sshd[5169]: Accepted publickey for core from 10.0.0.1 port 52582 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:23:27.097800 kernel: audit: type=1101 audit(1765560207.089:793): pid=5169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:27.097914 kernel: audit: type=1103 audit(1765560207.091:794): pid=5169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:27.091000 audit[5169]: CRED_ACQ pid=5169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:27.094397 sshd-session[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:23:27.101062 kernel: audit: type=1006 audit(1765560207.091:795): pid=5169 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 12 17:23:27.091000 audit[5169]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeecb8820 a2=3 a3=0 items=0 ppid=1 pid=5169 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:27.107000 kernel: audit: type=1300 audit(1765560207.091:795): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeecb8820 a2=3 a3=0 items=0 ppid=1 pid=5169 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:27.107098 kernel: audit: type=1327 audit(1765560207.091:795): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:27.091000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:27.110227 systemd-logind[1562]: New session 
15 of user core. Dec 12 17:23:27.120284 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 17:23:27.123000 audit[5169]: USER_START pid=5169 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:27.124000 audit[5172]: CRED_ACQ pid=5172 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:27.130983 kernel: audit: type=1105 audit(1765560207.123:796): pid=5169 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:27.131072 kernel: audit: type=1103 audit(1765560207.124:797): pid=5172 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:27.330175 sshd[5172]: Connection closed by 10.0.0.1 port 52582 Dec 12 17:23:27.331014 sshd-session[5169]: pam_unix(sshd:session): session closed for user core Dec 12 17:23:27.333000 audit[5169]: USER_END pid=5169 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:27.337107 systemd[1]: sshd@14-10.0.0.37:22-10.0.0.1:52582.service: Deactivated successfully. Dec 12 17:23:27.333000 audit[5169]: CRED_DISP pid=5169 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:27.341087 kernel: audit: type=1106 audit(1765560207.333:798): pid=5169 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:27.341176 kernel: audit: type=1104 audit(1765560207.333:799): pid=5169 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:27.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.37:22-10.0.0.1:52582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:27.339418 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 17:23:27.341588 systemd-logind[1562]: Session 15 logged out. Waiting for processes to exit. Dec 12 17:23:27.344557 systemd-logind[1562]: Removed session 15. 
Dec 12 17:23:29.391202 kubelet[2721]: E1212 17:23:29.391151 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9d446d548-w7qwm" podUID="2fb0c1c5-10e8-4dec-a57c-dc20c81a6882" Dec 12 17:23:32.344844 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:23:32.344969 kernel: audit: type=1130 audit(1765560212.342:801): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.37:22-10.0.0.1:60730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:32.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.37:22-10.0.0.1:60730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:32.343378 systemd[1]: Started sshd@15-10.0.0.37:22-10.0.0.1:60730.service - OpenSSH per-connection server daemon (10.0.0.1:60730). Dec 12 17:23:32.415000 audit[5189]: USER_ACCT pid=5189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:32.419788 sshd[5189]: Accepted publickey for core from 10.0.0.1 port 60730 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:23:32.420157 kernel: audit: type=1101 audit(1765560212.415:802): pid=5189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:32.419000 audit[5189]: CRED_ACQ pid=5189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:32.421413 sshd-session[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:23:32.425372 kernel: audit: type=1103 audit(1765560212.419:803): pid=5189 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:32.425441 kernel: audit: type=1006 audit(1765560212.420:804): pid=5189 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 12 17:23:32.425461 kernel: audit: type=1300 audit(1765560212.420:804): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe6658d00 a2=3 a3=0 items=0 ppid=1 pid=5189 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:32.420000 audit[5189]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 
a1=ffffe6658d00 a2=3 a3=0 items=0 ppid=1 pid=5189 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:32.420000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:32.428303 systemd-logind[1562]: New session 16 of user core. Dec 12 17:23:32.429279 kernel: audit: type=1327 audit(1765560212.420:804): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:32.439286 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 12 17:23:32.441000 audit[5189]: USER_START pid=5189 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:32.443000 audit[5192]: CRED_ACQ pid=5192 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:32.448080 kernel: audit: type=1105 audit(1765560212.441:805): pid=5189 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:32.448176 kernel: audit: type=1103 audit(1765560212.443:806): pid=5192 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:32.610688 sshd[5192]: Connection closed by 10.0.0.1 port 60730 Dec 12 17:23:32.611910 sshd-session[5189]: pam_unix(sshd:session): session closed for user core Dec 12 17:23:32.612000 audit[5189]: USER_END pid=5189 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:32.612000 audit[5189]: CRED_DISP pid=5189 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:32.618787 kernel: audit: type=1106 audit(1765560212.612:807): pid=5189 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:32.618887 kernel: audit: type=1104 audit(1765560212.612:808): pid=5189 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:32.625886 systemd[1]: sshd@15-10.0.0.37:22-10.0.0.1:60730.service: Deactivated successfully. 
Dec 12 17:23:32.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.37:22-10.0.0.1:60730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:32.629898 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 17:23:32.631678 systemd-logind[1562]: Session 16 logged out. Waiting for processes to exit. Dec 12 17:23:32.633892 systemd-logind[1562]: Removed session 16. Dec 12 17:23:32.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.37:22-10.0.0.1:60740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:32.637815 systemd[1]: Started sshd@16-10.0.0.37:22-10.0.0.1:60740.service - OpenSSH per-connection server daemon (10.0.0.1:60740). Dec 12 17:23:32.711000 audit[5207]: USER_ACCT pid=5207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:32.712291 sshd[5207]: Accepted publickey for core from 10.0.0.1 port 60740 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:23:32.712000 audit[5207]: CRED_ACQ pid=5207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:32.712000 audit[5207]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff2d53aa0 a2=3 a3=0 items=0 ppid=1 pid=5207 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:32.712000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:32.713719 sshd-session[5207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:23:32.718962 systemd-logind[1562]: New session 17 of user core. Dec 12 17:23:32.728339 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 12 17:23:32.731000 audit[5207]: USER_START pid=5207 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:32.733000 audit[5210]: CRED_ACQ pid=5210 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:32.916966 sshd[5210]: Connection closed by 10.0.0.1 port 60740 Dec 12 17:23:32.917708 sshd-session[5207]: pam_unix(sshd:session): session closed for user core Dec 12 17:23:32.918000 audit[5207]: USER_END pid=5207 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:32.918000 audit[5207]: CRED_DISP pid=5207 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:32.926781 systemd[1]: sshd@16-10.0.0.37:22-10.0.0.1:60740.service: Deactivated successfully. Dec 12 17:23:32.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.37:22-10.0.0.1:60740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:32.930994 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 17:23:32.932573 systemd-logind[1562]: Session 17 logged out. Waiting for processes to exit. Dec 12 17:23:32.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.37:22-10.0.0.1:60756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:32.935629 systemd[1]: Started sshd@17-10.0.0.37:22-10.0.0.1:60756.service - OpenSSH per-connection server daemon (10.0.0.1:60756). Dec 12 17:23:32.936532 systemd-logind[1562]: Removed session 17. 
Dec 12 17:23:33.004000 audit[5223]: USER_ACCT pid=5223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:33.005867 sshd[5223]: Accepted publickey for core from 10.0.0.1 port 60756 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:23:33.006000 audit[5223]: CRED_ACQ pid=5223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:33.006000 audit[5223]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe7100ac0 a2=3 a3=0 items=0 ppid=1 pid=5223 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:33.006000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:33.007700 sshd-session[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:23:33.012136 systemd-logind[1562]: New session 18 of user core. Dec 12 17:23:33.021308 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 12 17:23:33.022000 audit[5223]: USER_START pid=5223 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:33.024000 audit[5226]: CRED_ACQ pid=5226 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:33.390058 kubelet[2721]: E1212 17:23:33.389357 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:23:33.391115 kubelet[2721]: E1212 17:23:33.391082 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-47rgc" podUID="2e84fb14-216b-4920-bea5-4db1089bfe0c" Dec 12 17:23:33.658000 audit[5240]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:33.658000 audit[5240]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffdc0503e0 a2=0 a3=1 items=0 ppid=2830 pid=5240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:33.658000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:33.668000 audit[5240]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5240 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:33.668000 audit[5240]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdc0503e0 a2=0 a3=1 items=0 ppid=2830 pid=5240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:33.668000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:33.686000 audit[5242]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=5242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:33.686000 audit[5242]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd458dfd0 a2=0 a3=1 items=0 ppid=2830 pid=5242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:33.686000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:33.694000 audit[5242]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5242 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:33.697112 sshd[5226]: Connection closed by 10.0.0.1 port 60756 Dec 12 17:23:33.697254 sshd-session[5223]: pam_unix(sshd:session): session closed for user core Dec 12 17:23:33.699000 audit[5223]: USER_END pid=5223 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:33.699000 audit[5223]: CRED_DISP pid=5223 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:33.694000 audit[5242]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd458dfd0 a2=0 a3=1 items=0 ppid=2830 pid=5242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:33.694000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:33.708231 systemd[1]: sshd@17-10.0.0.37:22-10.0.0.1:60756.service: Deactivated successfully. Dec 12 17:23:33.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.37:22-10.0.0.1:60756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:33.712874 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 17:23:33.714821 systemd-logind[1562]: Session 18 logged out. Waiting for processes to exit. 
Dec 12 17:23:33.719237 systemd[1]: Started sshd@18-10.0.0.37:22-10.0.0.1:60762.service - OpenSSH per-connection server daemon (10.0.0.1:60762). Dec 12 17:23:33.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.37:22-10.0.0.1:60762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:33.720672 systemd-logind[1562]: Removed session 18. Dec 12 17:23:33.783000 audit[5247]: USER_ACCT pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:33.784532 sshd[5247]: Accepted publickey for core from 10.0.0.1 port 60762 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:23:33.784000 audit[5247]: CRED_ACQ pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:33.784000 audit[5247]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe8516b00 a2=3 a3=0 items=0 ppid=1 pid=5247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:33.784000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:33.785719 sshd-session[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:23:33.790637 systemd-logind[1562]: New session 19 of user core. Dec 12 17:23:33.815307 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 12 17:23:33.817000 audit[5247]: USER_START pid=5247 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:33.819000 audit[5250]: CRED_ACQ pid=5250 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:34.099633 sshd[5250]: Connection closed by 10.0.0.1 port 60762 Dec 12 17:23:34.100351 sshd-session[5247]: pam_unix(sshd:session): session closed for user core Dec 12 17:23:34.101000 audit[5247]: USER_END pid=5247 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:34.101000 audit[5247]: CRED_DISP pid=5247 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:34.111251 systemd[1]: sshd@18-10.0.0.37:22-10.0.0.1:60762.service: Deactivated successfully. 
Dec 12 17:23:34.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.37:22-10.0.0.1:60762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:34.114026 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 17:23:34.115283 systemd-logind[1562]: Session 19 logged out. Waiting for processes to exit. Dec 12 17:23:34.118972 systemd[1]: Started sshd@19-10.0.0.37:22-10.0.0.1:60766.service - OpenSSH per-connection server daemon (10.0.0.1:60766). Dec 12 17:23:34.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.37:22-10.0.0.1:60766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:34.121151 systemd-logind[1562]: Removed session 19. Dec 12 17:23:34.195000 audit[5261]: USER_ACCT pid=5261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:34.197478 sshd[5261]: Accepted publickey for core from 10.0.0.1 port 60766 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:23:34.197000 audit[5261]: CRED_ACQ pid=5261 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:34.197000 audit[5261]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffb06e090 a2=3 a3=0 items=0 ppid=1 pid=5261 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:34.197000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:34.198725 sshd-session[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:23:34.205769 systemd-logind[1562]: New session 20 of user core. Dec 12 17:23:34.213406 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 12 17:23:34.216000 audit[5261]: USER_START pid=5261 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:34.219000 audit[5264]: CRED_ACQ pid=5264 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:34.322708 sshd[5264]: Connection closed by 10.0.0.1 port 60766 Dec 12 17:23:34.323335 sshd-session[5261]: pam_unix(sshd:session): session closed for user core Dec 12 17:23:34.323000 audit[5261]: USER_END pid=5261 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:34.323000 audit[5261]: CRED_DISP pid=5261 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:34.329079 systemd[1]: sshd@19-10.0.0.37:22-10.0.0.1:60766.service: Deactivated successfully. Dec 12 17:23:34.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.37:22-10.0.0.1:60766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:34.334521 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 17:23:34.336658 systemd-logind[1562]: Session 20 logged out. Waiting for processes to exit. Dec 12 17:23:34.338470 systemd-logind[1562]: Removed session 20. 
Dec 12 17:23:34.389875 kubelet[2721]: E1212 17:23:34.389758 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8546bdfd97-9gwkk" podUID="84025a67-a460-4acb-8e7e-d73c2b743a45" Dec 12 17:23:35.389776 kubelet[2721]: E1212 17:23:35.389515 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:23:35.391310 kubelet[2721]: E1212 17:23:35.391237 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8546bdfd97-vmgm5" podUID="814ad701-33db-45b4-b87d-3357a1210c6d" Dec 12 17:23:36.395997 kubelet[2721]: E1212 17:23:36.390870 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c7zbf" podUID="1e0e30e4-6fdf-475c-9a4a-59287d927d5d" Dec 12 17:23:37.391221 containerd[1581]: time="2025-12-12T17:23:37.391177212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:23:37.587363 containerd[1581]: time="2025-12-12T17:23:37.587269703Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:37.588306 containerd[1581]: time="2025-12-12T17:23:37.588270171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:23:37.588472 containerd[1581]: time="2025-12-12T17:23:37.588347890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:37.588544 kubelet[2721]: E1212 17:23:37.588504 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" 
Dec 12 17:23:37.588932 kubelet[2721]: E1212 17:23:37.588556 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:23:37.588932 kubelet[2721]: E1212 17:23:37.588664 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:16fbc09b741f4bad9a7e46252a777791,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vfs7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78c6bf566f-r6z8g_calico-system(b80b51cf-55e4-4f30-8272-528fb61d0936): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:37.590806 containerd[1581]: time="2025-12-12T17:23:37.590775581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:23:37.816811 containerd[1581]: time="2025-12-12T17:23:37.816737434Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:37.817725 containerd[1581]: time="2025-12-12T17:23:37.817683823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:23:37.817807 containerd[1581]: time="2025-12-12T17:23:37.817781742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:37.818061 kubelet[2721]: E1212 17:23:37.817950 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:23:37.818227 kubelet[2721]: E1212 
17:23:37.818022 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:23:37.818366 kubelet[2721]: E1212 17:23:37.818329 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vfs7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78c6bf566f-r6z8g_calico-system(b80b51cf-55e4-4f30-8272-528fb61d0936): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:37.819757 kubelet[2721]: E1212 17:23:37.819706 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78c6bf566f-r6z8g" podUID="b80b51cf-55e4-4f30-8272-528fb61d0936" Dec 12 17:23:38.208000 audit[5278]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:38.211672 
kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 12 17:23:38.211758 kernel: audit: type=1325 audit(1765560218.208:850): table=filter:148 family=2 entries=26 op=nft_register_rule pid=5278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:38.211780 kernel: audit: type=1300 audit(1765560218.208:850): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff9e35eb0 a2=0 a3=1 items=0 ppid=2830 pid=5278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:38.208000 audit[5278]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff9e35eb0 a2=0 a3=1 items=0 ppid=2830 pid=5278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:38.208000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:38.216555 kernel: audit: type=1327 audit(1765560218.208:850): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:38.218000 audit[5278]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:38.218000 audit[5278]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=fffff9e35eb0 a2=0 a3=1 items=0 ppid=2830 pid=5278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:38.225113 kernel: audit: type=1325 audit(1765560218.218:851): table=nat:149 family=2 entries=104 op=nft_register_chain pid=5278 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:23:38.225165 kernel: audit: type=1300 audit(1765560218.218:851): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=fffff9e35eb0 a2=0 a3=1 items=0 ppid=2830 pid=5278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:38.225182 kernel: audit: type=1327 audit(1765560218.218:851): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:38.218000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:23:38.388960 kubelet[2721]: E1212 17:23:38.388918 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:23:39.339214 systemd[1]: Started sshd@20-10.0.0.37:22-10.0.0.1:60770.service - OpenSSH per-connection server daemon (10.0.0.1:60770). Dec 12 17:23:39.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.37:22-10.0.0.1:60770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:23:39.343073 kernel: audit: type=1130 audit(1765560219.338:852): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.37:22-10.0.0.1:60770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:39.411479 sshd[5280]: Accepted publickey for core from 10.0.0.1 port 60770 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:23:39.410000 audit[5280]: USER_ACCT pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:39.414000 audit[5280]: CRED_ACQ pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:39.415945 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:23:39.418056 kernel: audit: type=1101 audit(1765560219.410:853): pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:39.418124 kernel: audit: type=1103 audit(1765560219.414:854): pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:39.418144 kernel: audit: type=1006 audit(1765560219.414:855): pid=5280 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 12 17:23:39.414000 audit[5280]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd026ede0 a2=3 a3=0 items=0 ppid=1 pid=5280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:39.414000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:39.420479 systemd-logind[1562]: New session 21 of user core. Dec 12 17:23:39.430231 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 12 17:23:39.431000 audit[5280]: USER_START pid=5280 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:39.433000 audit[5283]: CRED_ACQ pid=5283 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:39.530741 sshd[5283]: Connection closed by 10.0.0.1 port 60770 Dec 12 17:23:39.531454 sshd-session[5280]: pam_unix(sshd:session): session closed for user core Dec 12 17:23:39.532000 audit[5280]: USER_END pid=5280 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:39.532000 audit[5280]: CRED_DISP pid=5280 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:39.536299 systemd[1]: sshd@20-10.0.0.37:22-10.0.0.1:60770.service: Deactivated successfully. Dec 12 17:23:39.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.37:22-10.0.0.1:60770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:39.538508 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 17:23:39.539436 systemd-logind[1562]: Session 21 logged out. Waiting for processes to exit. Dec 12 17:23:39.541869 systemd-logind[1562]: Removed session 21. Dec 12 17:23:44.390867 containerd[1581]: time="2025-12-12T17:23:44.390797516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:23:44.546677 systemd[1]: Started sshd@21-10.0.0.37:22-10.0.0.1:50018.service - OpenSSH per-connection server daemon (10.0.0.1:50018). Dec 12 17:23:44.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.37:22-10.0.0.1:50018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:44.550205 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 12 17:23:44.550274 kernel: audit: type=1130 audit(1765560224.546:861): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.37:22-10.0.0.1:50018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:23:44.605885 containerd[1581]: time="2025-12-12T17:23:44.605704847Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:44.606872 containerd[1581]: time="2025-12-12T17:23:44.606728442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:23:44.606872 containerd[1581]: time="2025-12-12T17:23:44.606818841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:44.607080 kubelet[2721]: E1212 17:23:44.607015 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:23:44.607371 kubelet[2721]: E1212 17:23:44.607089 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:23:44.607537 containerd[1581]: time="2025-12-12T17:23:44.607511517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:23:44.608210 kubelet[2721]: E1212 17:23:44.608142 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gp8zz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-47rgc_calico-system(2e84fb14-216b-4920-bea5-4db1089bfe0c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:44.609505 kubelet[2721]: E1212 17:23:44.609408 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-47rgc" podUID="2e84fb14-216b-4920-bea5-4db1089bfe0c" Dec 12 17:23:44.626000 audit[5329]: USER_ACCT pid=5329 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:44.627831 sshd[5329]: Accepted publickey for core from 10.0.0.1 port 50018 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:23:44.631077 kernel: audit: type=1101 audit(1765560224.626:862): pid=5329 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:44.630000 audit[5329]: CRED_ACQ pid=5329 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:44.632454 sshd-session[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:23:44.636903 kernel: audit: type=1103 audit(1765560224.630:863): pid=5329 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:44.637142 kernel: audit: type=1006 audit(1765560224.631:864): pid=5329 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 12 17:23:44.638265 kernel: audit: type=1300 audit(1765560224.631:864): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd5021400 a2=3 a3=0 items=0 ppid=1 pid=5329 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:44.631000 audit[5329]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd5021400 a2=3 a3=0 items=0 ppid=1 pid=5329 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:44.638753 systemd-logind[1562]: New session 22 of user core. Dec 12 17:23:44.631000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:44.641432 kernel: audit: type=1327 audit(1765560224.631:864): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:44.646301 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 12 17:23:44.647000 audit[5329]: USER_START pid=5329 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:44.650000 audit[5332]: CRED_ACQ pid=5332 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:44.654811 kernel: audit: type=1105 audit(1765560224.647:865): pid=5329 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:44.654900 kernel: audit: type=1103 audit(1765560224.650:866): pid=5332 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:44.739003 sshd[5332]: Connection closed by 10.0.0.1 port 50018 Dec 12 17:23:44.739470 sshd-session[5329]: pam_unix(sshd:session): session closed for user core Dec 12 17:23:44.740000 audit[5329]: USER_END pid=5329 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:44.743942 systemd[1]: sshd@21-10.0.0.37:22-10.0.0.1:50018.service: Deactivated successfully. Dec 12 17:23:44.740000 audit[5329]: CRED_DISP pid=5329 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:44.745818 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 17:23:44.746562 systemd-logind[1562]: Session 22 logged out. Waiting for processes to exit. 
Dec 12 17:23:44.746739 kernel: audit: type=1106 audit(1765560224.740:867): pid=5329 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:44.746775 kernel: audit: type=1104 audit(1765560224.740:868): pid=5329 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:44.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.37:22-10.0.0.1:50018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:44.747531 systemd-logind[1562]: Removed session 22. Dec 12 17:23:44.794515 containerd[1581]: time="2025-12-12T17:23:44.794463963Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:44.795446 containerd[1581]: time="2025-12-12T17:23:44.795409638Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:23:44.795502 containerd[1581]: time="2025-12-12T17:23:44.795441118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:44.795864 kubelet[2721]: E1212 17:23:44.795627 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:23:44.795864 kubelet[2721]: E1212 17:23:44.795680 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:23:44.795864 kubelet[2721]: E1212 17:23:44.795809 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhrc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-9d446d548-w7qwm_calico-system(2fb0c1c5-10e8-4dec-a57c-dc20c81a6882): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:44.797320 kubelet[2721]: E1212 17:23:44.797265 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9d446d548-w7qwm" podUID="2fb0c1c5-10e8-4dec-a57c-dc20c81a6882" Dec 12 17:23:46.390921 containerd[1581]: time="2025-12-12T17:23:46.390850842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:23:46.586528 containerd[1581]: 
time="2025-12-12T17:23:46.586333952Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:46.587659 containerd[1581]: time="2025-12-12T17:23:46.587615427Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:23:46.587855 containerd[1581]: time="2025-12-12T17:23:46.587686947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:46.587990 kubelet[2721]: E1212 17:23:46.587948 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:23:46.588287 kubelet[2721]: E1212 17:23:46.588002 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:23:46.588287 kubelet[2721]: E1212 17:23:46.588201 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vtf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-8546bdfd97-vmgm5_calico-apiserver(814ad701-33db-45b4-b87d-3357a1210c6d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:46.588928 containerd[1581]: time="2025-12-12T17:23:46.588896782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:23:46.589911 kubelet[2721]: E1212 17:23:46.589877 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8546bdfd97-vmgm5" podUID="814ad701-33db-45b4-b87d-3357a1210c6d" Dec 12 17:23:46.785617 containerd[1581]: time="2025-12-12T17:23:46.785487968Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:46.786682 containerd[1581]: time="2025-12-12T17:23:46.786636924Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:23:46.786805 containerd[1581]: time="2025-12-12T17:23:46.786666524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:46.787220 kubelet[2721]: E1212 17:23:46.787177 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:23:46.787291 kubelet[2721]: E1212 17:23:46.787235 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:23:46.787841 kubelet[2721]: E1212 17:23:46.787376 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2fth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8546bdfd97-9gwkk_calico-apiserver(84025a67-a460-4acb-8e7e-d73c2b743a45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:46.788615 kubelet[2721]: E1212 17:23:46.788551 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8546bdfd97-9gwkk" podUID="84025a67-a460-4acb-8e7e-d73c2b743a45" Dec 12 17:23:49.389851 kubelet[2721]: E1212 17:23:49.389452 2721 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 17:23:49.391166 containerd[1581]: time="2025-12-12T17:23:49.391011349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:23:49.562166 containerd[1581]: time="2025-12-12T17:23:49.562119774Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:49.563419 containerd[1581]: time="2025-12-12T17:23:49.563379091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 
17:23:49.563525 containerd[1581]: time="2025-12-12T17:23:49.563464331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:49.563687 kubelet[2721]: E1212 17:23:49.563629 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:23:49.563728 kubelet[2721]: E1212 17:23:49.563699 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:23:49.563852 kubelet[2721]: E1212 17:23:49.563813 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8stxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-c7zbf_calico-system(1e0e30e4-6fdf-475c-9a4a-59287d927d5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:49.567104 containerd[1581]: time="2025-12-12T17:23:49.567022005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:23:49.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.37:22-10.0.0.1:50026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:23:49.754617 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:23:49.754663 kernel: audit: type=1130 audit(1765560229.752:870): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.37:22-10.0.0.1:50026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:49.753692 systemd[1]: Started sshd@22-10.0.0.37:22-10.0.0.1:50026.service - OpenSSH per-connection server daemon (10.0.0.1:50026). Dec 12 17:23:49.788793 containerd[1581]: time="2025-12-12T17:23:49.788749662Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:23:49.789823 containerd[1581]: time="2025-12-12T17:23:49.789779700Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:23:49.789905 containerd[1581]: time="2025-12-12T17:23:49.789849340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:23:49.790180 kubelet[2721]: E1212 17:23:49.790141 2721 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:23:49.790258 kubelet[2721]: E1212 17:23:49.790195 2721 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:23:49.790373 kubelet[2721]: E1212 17:23:49.790331 2721 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8stxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-c7zbf_calico-system(1e0e30e4-6fdf-475c-9a4a-59287d927d5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:23:49.792109 kubelet[2721]: E1212 17:23:49.791485 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c7zbf" podUID="1e0e30e4-6fdf-475c-9a4a-59287d927d5d" Dec 12 17:23:49.817000 audit[5350]: USER_ACCT pid=5350 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:49.819236 sshd[5350]: Accepted publickey for core from 10.0.0.1 port 50026 ssh2: RSA SHA256:wUd39nlallfs/Umj+eKA6R9QOY2s+GT2V6fjbzJSeiY Dec 12 17:23:49.823085 kernel: audit: type=1101 audit(1765560229.817:871): pid=5350 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:49.823743 sshd-session[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:23:49.822000 audit[5350]: CRED_ACQ pid=5350 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:49.828413 kernel: audit: type=1103 audit(1765560229.822:872): pid=5350 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:49.828754 kernel: audit: type=1006 audit(1765560229.822:873): pid=5350 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 12 17:23:49.828773 kernel: audit: type=1300 audit(1765560229.822:873): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc7fd1c30 a2=3 a3=0 items=0 ppid=1 pid=5350 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:49.822000 audit[5350]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc7fd1c30 a2=3 a3=0 items=0 ppid=1 pid=5350 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:23:49.822000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:49.832507 kernel: audit: type=1327 audit(1765560229.822:873): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:23:49.836679 systemd-logind[1562]: New session 23 of user core. Dec 12 17:23:49.849296 systemd[1]: Started session-23.scope - Session 23 of User core. 
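The containerd entries above ("fetch failed after status: 404 Not Found" host=ghcr.io) come down to the registry reporting that the requested tag's manifest does not exist. A hedged Go sketch of that round trip against the OCI distribution API; the anonymous token endpoint and query parameters are assumptions about ghcr.io following the Docker registry token scheme, and containerd's real resolver negotiates media types and fallbacks that are omitted here.

```go
// check_manifest.go — illustrative sketch: ask ghcr.io whether a tag's manifest
// exists, mirroring the 404 the log shows for the calico images.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func manifestStatus(repo, tag string) (int, error) {
	// Assumption: ghcr.io hands out anonymous pull tokens for public repos via
	// the Docker registry token scheme at /token.
	tr, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		return 0, err
	}
	defer tr.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(tr.Body).Decode(&tok); err != nil {
		return 0, err
	}

	req, err := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		return 0, err
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	// Same reference the kubelet keeps failing on; a 404 matches the log lines.
	code, err := manifestStatus("flatcar/calico/node-driver-registrar", "v3.30.4")
	if err != nil {
		panic(err)
	}
	fmt.Println(code)
}
```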
Dec 12 17:23:49.850000 audit[5350]: USER_START pid=5350 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:49.854000 audit[5353]: CRED_ACQ pid=5353 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:49.858651 kernel: audit: type=1105 audit(1765560229.850:874): pid=5350 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:49.858751 kernel: audit: type=1103 audit(1765560229.854:875): pid=5353 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:49.944883 sshd[5353]: Connection closed by 10.0.0.1 port 50026 Dec 12 17:23:49.946246 sshd-session[5350]: pam_unix(sshd:session): session closed for user core Dec 12 17:23:49.946000 audit[5350]: USER_END pid=5350 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:49.949000 audit[5350]: CRED_DISP pid=5350 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:49.952227 systemd[1]: sshd@22-10.0.0.37:22-10.0.0.1:50026.service: Deactivated successfully. Dec 12 17:23:49.953157 kernel: audit: type=1106 audit(1765560229.946:876): pid=5350 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:49.953289 kernel: audit: type=1104 audit(1765560229.949:877): pid=5350 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 12 17:23:49.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.37:22-10.0.0.1:50026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:23:49.955744 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 17:23:49.958454 systemd-logind[1562]: Session 23 logged out. Waiting for processes to exit. Dec 12 17:23:49.959765 systemd-logind[1562]: Removed session 23. 
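The earlier dns.go:153 entry ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") reflects the kubelet applying only the first three nameservers from resolv.conf and omitting the rest. A small Go sketch of that trimming; the limit of three matches what the log shows and the classic glibc MAXNS value, while the file path and helper names are illustrative, not the kubelet's own code.

```go
// resolv_limit.go — illustrative sketch of the nameserver trimming behind the
// "Nameserver limits exceeded" kubelet entry above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // assumption: mirrors the limit the kubelet enforces

func appliedNameservers(path string) (applied, omitted []string, err error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, nil, err
	}
	defer f.Close()

	var all []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			all = append(all, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		return nil, nil, err
	}
	if len(all) <= maxNameservers {
		return all, nil, nil
	}
	return all[:maxNameservers], all[maxNameservers:], nil
}

func main() {
	applied, omitted, err := appliedNameservers("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	fmt.Println("applied:", applied) // e.g. [1.1.1.1 1.0.0.1 8.8.8.8] as in the log
	if len(omitted) > 0 {
		fmt.Println("omitted:", omitted)
	}
}
```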
Dec 12 17:23:51.392808 kubelet[2721]: E1212 17:23:51.392724 2721 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78c6bf566f-r6z8g" podUID="b80b51cf-55e4-4f30-8272-528fb61d0936"
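The final entry shows ErrImagePull turning into ImagePullBackOff: once a pull has failed, the kubelet keeps a per-image exponential back-off and rejects further attempts until it expires. A hedged Go sketch of that behaviour; the back-off type and the 10-second/5-minute bounds are assumptions about the defaults rather than values taken from this log, and the kubelet's own implementation uses its flow-control back-off helper rather than this toy tracker.

```go
// pull_backoff.go — illustrative sketch of per-image pull back-off, the
// mechanism behind the "Back-off pulling image ..." messages above.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	delay   map[string]time.Duration
	nextTry map[string]time.Time
	initial time.Duration
	max     time.Duration
}

func newBackoff(initial, max time.Duration) *backoff {
	return &backoff{
		delay:   map[string]time.Duration{},
		nextTry: map[string]time.Time{},
		initial: initial,
		max:     max,
	}
}

// fail records another pull failure and doubles the wait, up to the cap.
func (b *backoff) fail(image string, now time.Time) {
	d, ok := b.delay[image]
	if !ok {
		d = b.initial
	} else if d *= 2; d > b.max {
		d = b.max
	}
	b.delay[image] = d
	b.nextTry[image] = now.Add(d)
}

// inBackoff reports whether a new pull attempt would be refused right now,
// which is when a "Back-off pulling image" error is surfaced instead.
func (b *backoff) inBackoff(image string, now time.Time) bool {
	return now.Before(b.nextTry[image])
}

func main() {
	b := newBackoff(10*time.Second, 5*time.Minute) // assumed defaults
	img := "ghcr.io/flatcar/calico/whisker:v3.30.4"
	now := time.Now()
	for i := 0; i < 3; i++ {
		b.fail(img, now)
		fmt.Printf("attempt %d failed, next retry allowed in %s\n", i+1, b.delay[img])
		now = b.nextTry[img]
	}
	fmt.Println("still in back-off just after a failure:",
		b.inBackoff(img, now.Add(-time.Second)))
}
```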