Dec 16 02:15:06.291079 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 16 02:15:06.291103 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Dec 16 00:05:24 -00 2025 Dec 16 02:15:06.291113 kernel: KASLR enabled Dec 16 02:15:06.291120 kernel: efi: EFI v2.7 by EDK II Dec 16 02:15:06.291126 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18 Dec 16 02:15:06.291132 kernel: random: crng init done Dec 16 02:15:06.291140 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Dec 16 02:15:06.291146 kernel: secureboot: Secure boot enabled Dec 16 02:15:06.291154 kernel: ACPI: Early table checksum verification disabled Dec 16 02:15:06.291160 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS ) Dec 16 02:15:06.291166 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013) Dec 16 02:15:06.291172 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 02:15:06.291178 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 02:15:06.291184 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 02:15:06.291193 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 02:15:06.291199 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 02:15:06.291206 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 02:15:06.291212 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 02:15:06.291219 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 02:15:06.291225 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 02:15:06.291232 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Dec 16 02:15:06.291238 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 16 02:15:06.291246 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Dec 16 02:15:06.291252 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff] Dec 16 02:15:06.291259 kernel: Zone ranges: Dec 16 02:15:06.291265 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Dec 16 02:15:06.291271 kernel: DMA32 empty Dec 16 02:15:06.291277 kernel: Normal empty Dec 16 02:15:06.291284 kernel: Device empty Dec 16 02:15:06.291290 kernel: Movable zone start for each node Dec 16 02:15:06.291296 kernel: Early memory node ranges Dec 16 02:15:06.291303 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff] Dec 16 02:15:06.291309 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff] Dec 16 02:15:06.291316 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff] Dec 16 02:15:06.291323 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff] Dec 16 02:15:06.291330 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff] Dec 16 02:15:06.291336 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff] Dec 16 02:15:06.291355 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff] Dec 16 02:15:06.291363 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff] Dec 16 02:15:06.291370 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Dec 16 02:15:06.291381 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000000dcffffff] Dec 16 02:15:06.291389 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Dec 16 02:15:06.291396 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1 Dec 16 02:15:06.291403 kernel: psci: probing for conduit method from ACPI. Dec 16 02:15:06.291410 kernel: psci: PSCIv1.1 detected in firmware. Dec 16 02:15:06.291417 kernel: psci: Using standard PSCI v0.2 function IDs Dec 16 02:15:06.291424 kernel: psci: Trusted OS migration not required Dec 16 02:15:06.291431 kernel: psci: SMC Calling Convention v1.1 Dec 16 02:15:06.291440 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Dec 16 02:15:06.291447 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 16 02:15:06.291454 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 16 02:15:06.291461 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Dec 16 02:15:06.291468 kernel: Detected PIPT I-cache on CPU0 Dec 16 02:15:06.291475 kernel: CPU features: detected: GIC system register CPU interface Dec 16 02:15:06.291482 kernel: CPU features: detected: Spectre-v4 Dec 16 02:15:06.291489 kernel: CPU features: detected: Spectre-BHB Dec 16 02:15:06.291496 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 16 02:15:06.291503 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 16 02:15:06.291510 kernel: CPU features: detected: ARM erratum 1418040 Dec 16 02:15:06.291518 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 16 02:15:06.291525 kernel: alternatives: applying boot alternatives Dec 16 02:15:06.291533 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=756b815c2fd7ac2947efceb2a88878d1ea9723ec85037c2b4d1a09bd798bb749 Dec 16 02:15:06.291541 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 16 02:15:06.291548 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 02:15:06.291555 kernel: Fallback order for Node 0: 0 Dec 16 02:15:06.291562 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Dec 16 02:15:06.291568 kernel: Policy zone: DMA Dec 16 02:15:06.291575 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 02:15:06.291582 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Dec 16 02:15:06.291604 kernel: software IO TLB: area num 4. Dec 16 02:15:06.291611 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Dec 16 02:15:06.291618 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB) Dec 16 02:15:06.291625 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 16 02:15:06.291632 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 02:15:06.291639 kernel: rcu: RCU event tracing is enabled. Dec 16 02:15:06.291646 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 16 02:15:06.291653 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 02:15:06.291660 kernel: Tracing variant of Tasks RCU enabled. Dec 16 02:15:06.291668 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 16 02:15:06.291675 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 16 02:15:06.291682 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 16 02:15:06.291691 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 16 02:15:06.291698 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 16 02:15:06.291705 kernel: GICv3: 256 SPIs implemented Dec 16 02:15:06.291712 kernel: GICv3: 0 Extended SPIs implemented Dec 16 02:15:06.291719 kernel: Root IRQ handler: gic_handle_irq Dec 16 02:15:06.291726 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 16 02:15:06.291733 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Dec 16 02:15:06.291740 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Dec 16 02:15:06.291747 kernel: ITS [mem 0x08080000-0x0809ffff] Dec 16 02:15:06.291754 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Dec 16 02:15:06.291761 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Dec 16 02:15:06.291769 kernel: GICv3: using LPI property table @0x0000000040130000 Dec 16 02:15:06.291777 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Dec 16 02:15:06.291784 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 02:15:06.291790 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 16 02:15:06.291797 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 16 02:15:06.291805 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 16 02:15:06.291812 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 16 02:15:06.291819 kernel: arm-pv: using stolen time PV Dec 16 02:15:06.291831 kernel: Console: colour dummy device 80x25 Dec 16 02:15:06.291842 kernel: ACPI: Core revision 20240827 Dec 16 02:15:06.291850 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 16 02:15:06.291857 kernel: pid_max: default: 32768 minimum: 301 Dec 16 02:15:06.291864 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 02:15:06.291872 kernel: landlock: Up and running. Dec 16 02:15:06.291879 kernel: SELinux: Initializing. Dec 16 02:15:06.291886 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 02:15:06.291896 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 16 02:15:06.291907 kernel: rcu: Hierarchical SRCU implementation. Dec 16 02:15:06.291915 kernel: rcu: Max phase no-delay instances is 400. Dec 16 02:15:06.291924 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 16 02:15:06.291933 kernel: Remapping and enabling EFI services. Dec 16 02:15:06.291943 kernel: smp: Bringing up secondary CPUs ... 
Dec 16 02:15:06.291950 kernel: Detected PIPT I-cache on CPU1 Dec 16 02:15:06.291958 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Dec 16 02:15:06.291967 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Dec 16 02:15:06.291975 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 16 02:15:06.291988 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 16 02:15:06.291998 kernel: Detected PIPT I-cache on CPU2 Dec 16 02:15:06.292006 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Dec 16 02:15:06.292014 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Dec 16 02:15:06.292022 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 16 02:15:06.292029 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Dec 16 02:15:06.292037 kernel: Detected PIPT I-cache on CPU3 Dec 16 02:15:06.292046 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Dec 16 02:15:06.292056 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Dec 16 02:15:06.292065 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 16 02:15:06.292073 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Dec 16 02:15:06.292081 kernel: smp: Brought up 1 node, 4 CPUs Dec 16 02:15:06.292090 kernel: SMP: Total of 4 processors activated. Dec 16 02:15:06.292104 kernel: CPU: All CPU(s) started at EL1 Dec 16 02:15:06.292112 kernel: CPU features: detected: 32-bit EL0 Support Dec 16 02:15:06.292120 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 16 02:15:06.292128 kernel: CPU features: detected: Common not Private translations Dec 16 02:15:06.292138 kernel: CPU features: detected: CRC32 instructions Dec 16 02:15:06.292145 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 16 02:15:06.292155 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 16 02:15:06.292163 kernel: CPU features: detected: LSE atomic instructions Dec 16 02:15:06.292172 kernel: CPU features: detected: Privileged Access Never Dec 16 02:15:06.292179 kernel: CPU features: detected: RAS Extension Support Dec 16 02:15:06.292187 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 16 02:15:06.292195 kernel: alternatives: applying system-wide alternatives Dec 16 02:15:06.292203 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Dec 16 02:15:06.292211 kernel: Memory: 2448740K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12480K init, 1038K bss, 101212K reserved, 16384K cma-reserved) Dec 16 02:15:06.292220 kernel: devtmpfs: initialized Dec 16 02:15:06.292228 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 02:15:06.292235 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 16 02:15:06.292243 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 16 02:15:06.292251 kernel: 0 pages in range for non-PLT usage Dec 16 02:15:06.292259 kernel: 515168 pages in range for PLT usage Dec 16 02:15:06.292267 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 02:15:06.292276 kernel: SMBIOS 3.0.0 present. 
Dec 16 02:15:06.292283 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Dec 16 02:15:06.292291 kernel: DMI: Memory slots populated: 1/1 Dec 16 02:15:06.292299 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 02:15:06.292307 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 16 02:15:06.292315 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 16 02:15:06.292323 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 16 02:15:06.292332 kernel: audit: initializing netlink subsys (disabled) Dec 16 02:15:06.292346 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1 Dec 16 02:15:06.292355 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 02:15:06.292362 kernel: cpuidle: using governor menu Dec 16 02:15:06.292370 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 16 02:15:06.292378 kernel: ASID allocator initialised with 32768 entries Dec 16 02:15:06.292386 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 02:15:06.292412 kernel: Serial: AMBA PL011 UART driver Dec 16 02:15:06.292420 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 02:15:06.292428 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 02:15:06.292435 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 16 02:15:06.292443 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 16 02:15:06.292452 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 02:15:06.292459 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 02:15:06.292467 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 16 02:15:06.292478 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 16 02:15:06.292486 kernel: ACPI: Added _OSI(Module Device) Dec 16 02:15:06.292494 kernel: ACPI: Added _OSI(Processor Device) Dec 16 02:15:06.292501 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 02:15:06.292509 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 02:15:06.292516 kernel: ACPI: Interpreter enabled Dec 16 02:15:06.292524 kernel: ACPI: Using GIC for interrupt routing Dec 16 02:15:06.292533 kernel: ACPI: MCFG table detected, 1 entries Dec 16 02:15:06.292541 kernel: ACPI: CPU0 has been hot-added Dec 16 02:15:06.292548 kernel: ACPI: CPU1 has been hot-added Dec 16 02:15:06.292556 kernel: ACPI: CPU2 has been hot-added Dec 16 02:15:06.292563 kernel: ACPI: CPU3 has been hot-added Dec 16 02:15:06.292571 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Dec 16 02:15:06.292579 kernel: printk: legacy console [ttyAMA0] enabled Dec 16 02:15:06.292594 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 16 02:15:06.292773 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 16 02:15:06.292865 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 16 02:15:06.292950 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 16 02:15:06.293034 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Dec 16 02:15:06.293119 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Dec 16 02:15:06.293132 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Dec 16 02:15:06.293140 
kernel: PCI host bridge to bus 0000:00 Dec 16 02:15:06.293229 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Dec 16 02:15:06.293305 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 16 02:15:06.293393 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Dec 16 02:15:06.293477 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 16 02:15:06.293581 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Dec 16 02:15:06.293697 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Dec 16 02:15:06.293787 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Dec 16 02:15:06.293884 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Dec 16 02:15:06.293970 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Dec 16 02:15:06.294055 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Dec 16 02:15:06.294136 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Dec 16 02:15:06.294217 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Dec 16 02:15:06.294294 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 16 02:15:06.294383 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 16 02:15:06.294463 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 16 02:15:06.294475 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 16 02:15:06.294483 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 16 02:15:06.294491 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 16 02:15:06.294499 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 16 02:15:06.294507 kernel: iommu: Default domain type: Translated Dec 16 02:15:06.294514 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 16 02:15:06.294523 kernel: efivars: Registered efivars operations Dec 16 02:15:06.294533 kernel: vgaarb: loaded Dec 16 02:15:06.294541 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 16 02:15:06.294549 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 02:15:06.294557 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 02:15:06.294565 kernel: pnp: PnP ACPI init Dec 16 02:15:06.294674 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 16 02:15:06.294689 kernel: pnp: PnP ACPI: found 1 devices Dec 16 02:15:06.294697 kernel: NET: Registered PF_INET protocol family Dec 16 02:15:06.294705 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 02:15:06.294713 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 16 02:15:06.294721 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 02:15:06.294729 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 02:15:06.294737 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 16 02:15:06.294746 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 16 02:15:06.294754 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 02:15:06.294761 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 02:15:06.294769 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 02:15:06.294777 kernel: PCI: CLS 0 bytes, default 64 Dec 16 02:15:06.294785 
kernel: kvm [1]: HYP mode not available Dec 16 02:15:06.294792 kernel: Initialise system trusted keyrings Dec 16 02:15:06.294800 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 16 02:15:06.294809 kernel: Key type asymmetric registered Dec 16 02:15:06.294816 kernel: Asymmetric key parser 'x509' registered Dec 16 02:15:06.294824 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 16 02:15:06.294831 kernel: io scheduler mq-deadline registered Dec 16 02:15:06.294839 kernel: io scheduler kyber registered Dec 16 02:15:06.294847 kernel: io scheduler bfq registered Dec 16 02:15:06.294854 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 16 02:15:06.294863 kernel: ACPI: button: Power Button [PWRB] Dec 16 02:15:06.294872 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 16 02:15:06.294956 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Dec 16 02:15:06.294967 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 02:15:06.294975 kernel: thunder_xcv, ver 1.0 Dec 16 02:15:06.294982 kernel: thunder_bgx, ver 1.0 Dec 16 02:15:06.294990 kernel: nicpf, ver 1.0 Dec 16 02:15:06.294999 kernel: nicvf, ver 1.0 Dec 16 02:15:06.295089 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 16 02:15:06.295168 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T02:15:05 UTC (1765851305) Dec 16 02:15:06.295178 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 02:15:06.295186 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 16 02:15:06.295193 kernel: watchdog: NMI not fully supported Dec 16 02:15:06.295203 kernel: watchdog: Hard watchdog permanently disabled Dec 16 02:15:06.295211 kernel: NET: Registered PF_INET6 protocol family Dec 16 02:15:06.295219 kernel: Segment Routing with IPv6 Dec 16 02:15:06.295226 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 02:15:06.295234 kernel: NET: Registered PF_PACKET protocol family Dec 16 02:15:06.295242 kernel: Key type dns_resolver registered Dec 16 02:15:06.295249 kernel: registered taskstats version 1 Dec 16 02:15:06.295257 kernel: Loading compiled-in X.509 certificates Dec 16 02:15:06.295266 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 545838337a91b65b763486e536766b3eec3ef99d' Dec 16 02:15:06.295273 kernel: Demotion targets for Node 0: null Dec 16 02:15:06.295281 kernel: Key type .fscrypt registered Dec 16 02:15:06.295289 kernel: Key type fscrypt-provisioning registered Dec 16 02:15:06.295296 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 16 02:15:06.295304 kernel: ima: Allocated hash algorithm: sha1 Dec 16 02:15:06.295313 kernel: ima: No architecture policies found Dec 16 02:15:06.295321 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 16 02:15:06.295328 kernel: clk: Disabling unused clocks Dec 16 02:15:06.295336 kernel: PM: genpd: Disabling unused power domains Dec 16 02:15:06.295352 kernel: Freeing unused kernel memory: 12480K Dec 16 02:15:06.295360 kernel: Run /init as init process Dec 16 02:15:06.295368 kernel: with arguments: Dec 16 02:15:06.295375 kernel: /init Dec 16 02:15:06.295385 kernel: with environment: Dec 16 02:15:06.295393 kernel: HOME=/ Dec 16 02:15:06.295401 kernel: TERM=linux Dec 16 02:15:06.295509 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Dec 16 02:15:06.296059 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Dec 16 02:15:06.296081 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 02:15:06.296094 kernel: GPT:16515071 != 27000831 Dec 16 02:15:06.296102 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 02:15:06.296110 kernel: GPT:16515071 != 27000831 Dec 16 02:15:06.296118 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 02:15:06.296126 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 02:15:06.296135 kernel: SCSI subsystem initialized Dec 16 02:15:06.296145 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 02:15:06.296157 kernel: device-mapper: uevent: version 1.0.3 Dec 16 02:15:06.296167 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 02:15:06.296177 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 16 02:15:06.296187 kernel: raid6: neonx8 gen() 15638 MB/s Dec 16 02:15:06.296197 kernel: raid6: neonx4 gen() 15588 MB/s Dec 16 02:15:06.296206 kernel: raid6: neonx2 gen() 13066 MB/s Dec 16 02:15:06.296215 kernel: raid6: neonx1 gen() 10398 MB/s Dec 16 02:15:06.296235 kernel: raid6: int64x8 gen() 6688 MB/s Dec 16 02:15:06.296244 kernel: raid6: int64x4 gen() 7220 MB/s Dec 16 02:15:06.296252 kernel: raid6: int64x2 gen() 6027 MB/s Dec 16 02:15:06.296260 kernel: raid6: int64x1 gen() 4935 MB/s Dec 16 02:15:06.296267 kernel: raid6: using algorithm neonx8 gen() 15638 MB/s Dec 16 02:15:06.296276 kernel: raid6: .... 
xor() 11350 MB/s, rmw enabled Dec 16 02:15:06.296283 kernel: raid6: using neon recovery algorithm Dec 16 02:15:06.296293 kernel: xor: measuring software checksum speed Dec 16 02:15:06.296301 kernel: 8regs : 21596 MB/sec Dec 16 02:15:06.296309 kernel: 32regs : 21687 MB/sec Dec 16 02:15:06.296316 kernel: arm64_neon : 26972 MB/sec Dec 16 02:15:06.296324 kernel: xor: using function: arm64_neon (26972 MB/sec) Dec 16 02:15:06.296332 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 02:15:06.296350 kernel: BTRFS: device fsid d00a2bc5-1c68-4957-aa37-d070193fcf05 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (204) Dec 16 02:15:06.296360 kernel: BTRFS info (device dm-0): first mount of filesystem d00a2bc5-1c68-4957-aa37-d070193fcf05 Dec 16 02:15:06.296371 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 16 02:15:06.296379 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 02:15:06.296387 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 02:15:06.296395 kernel: loop: module loaded Dec 16 02:15:06.296403 kernel: loop0: detected capacity change from 0 to 91832 Dec 16 02:15:06.296411 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 02:15:06.296420 systemd[1]: Successfully made /usr/ read-only. Dec 16 02:15:06.296432 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 02:15:06.296441 systemd[1]: Detected virtualization kvm. Dec 16 02:15:06.296449 systemd[1]: Detected architecture arm64. Dec 16 02:15:06.296465 systemd[1]: Running in initrd. Dec 16 02:15:06.296475 systemd[1]: No hostname configured, using default hostname. Dec 16 02:15:06.296487 systemd[1]: Hostname set to . Dec 16 02:15:06.296495 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 02:15:06.296504 systemd[1]: Queued start job for default target initrd.target. Dec 16 02:15:06.296512 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 02:15:06.296521 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 02:15:06.296529 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 02:15:06.296538 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 02:15:06.296548 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 02:15:06.296576 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 02:15:06.296597 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 02:15:06.296609 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 02:15:06.296618 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 02:15:06.296629 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 02:15:06.296638 systemd[1]: Reached target paths.target - Path Units. Dec 16 02:15:06.296646 systemd[1]: Reached target slices.target - Slice Units. 
Dec 16 02:15:06.296654 systemd[1]: Reached target swap.target - Swaps. Dec 16 02:15:06.296664 systemd[1]: Reached target timers.target - Timer Units. Dec 16 02:15:06.296672 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 02:15:06.296680 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 02:15:06.296692 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 02:15:06.296701 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 02:15:06.296709 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 02:15:06.296725 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 02:15:06.296734 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 02:15:06.296752 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 02:15:06.296762 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 02:15:06.296771 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 02:15:06.296780 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 02:15:06.296790 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 02:15:06.296799 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 02:15:06.296809 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 02:15:06.296819 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 02:15:06.296828 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 02:15:06.296836 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 02:15:06.296845 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 02:15:06.296855 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 02:15:06.296864 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 02:15:06.296873 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 02:15:06.296881 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 02:15:06.296920 systemd-journald[345]: Collecting audit messages is enabled. Dec 16 02:15:06.296943 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 02:15:06.296951 kernel: Bridge firewalling registered Dec 16 02:15:06.296960 systemd-journald[345]: Journal started Dec 16 02:15:06.296980 systemd-journald[345]: Runtime Journal (/run/log/journal/7795bdb1756940e5a0c16511442d0038) is 6M, max 48.5M, 42.4M free. Dec 16 02:15:06.296516 systemd-modules-load[346]: Inserted module 'br_netfilter' Dec 16 02:15:06.300521 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 02:15:06.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.301215 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Dec 16 02:15:06.307722 kernel: audit: type=1130 audit(1765851306.300:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.307757 kernel: audit: type=1130 audit(1765851306.303:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.307751 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:15:06.312160 kernel: audit: type=1130 audit(1765851306.308:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.312171 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 02:15:06.317101 kernel: audit: type=1130 audit(1765851306.313:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.316635 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 02:15:06.318842 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 02:15:06.320828 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 02:15:06.327472 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 02:15:06.334841 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 02:15:06.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.341623 kernel: audit: type=1130 audit(1765851306.335:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.341946 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 02:15:06.349526 kernel: audit: type=1130 audit(1765851306.344:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:15:06.349552 kernel: audit: type=1334 audit(1765851306.344:8): prog-id=6 op=LOAD Dec 16 02:15:06.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.344000 audit: BPF prog-id=6 op=LOAD Dec 16 02:15:06.342350 systemd-tmpfiles[368]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 02:15:06.346873 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 02:15:06.352974 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 02:15:06.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.358618 kernel: audit: type=1130 audit(1765851306.355:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.362782 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 02:15:06.367684 kernel: audit: type=1130 audit(1765851306.363:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.365808 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 02:15:06.388846 dracut-cmdline[389]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=756b815c2fd7ac2947efceb2a88878d1ea9723ec85037c2b4d1a09bd798bb749 Dec 16 02:15:06.405276 systemd-resolved[381]: Positive Trust Anchors: Dec 16 02:15:06.405297 systemd-resolved[381]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 02:15:06.405301 systemd-resolved[381]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 02:15:06.405332 systemd-resolved[381]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 02:15:06.431793 systemd-resolved[381]: Defaulting to hostname 'linux'. Dec 16 02:15:06.432636 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Dec 16 02:15:06.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.433876 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 02:15:06.479620 kernel: Loading iSCSI transport class v2.0-870. Dec 16 02:15:06.488638 kernel: iscsi: registered transport (tcp) Dec 16 02:15:06.504606 kernel: iscsi: registered transport (qla4xxx) Dec 16 02:15:06.504669 kernel: QLogic iSCSI HBA Driver Dec 16 02:15:06.525734 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 02:15:06.547767 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 02:15:06.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.550071 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 02:15:06.597028 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 02:15:06.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.599409 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 02:15:06.601090 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 02:15:06.634258 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 02:15:06.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.635000 audit: BPF prog-id=7 op=LOAD Dec 16 02:15:06.635000 audit: BPF prog-id=8 op=LOAD Dec 16 02:15:06.637433 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 02:15:06.665066 systemd-udevd[626]: Using default interface naming scheme 'v257'. Dec 16 02:15:06.673268 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 02:15:06.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.675474 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 02:15:06.703039 dracut-pre-trigger[694]: rd.md=0: removing MD RAID activation Dec 16 02:15:06.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.704000 audit: BPF prog-id=9 op=LOAD Dec 16 02:15:06.703051 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 02:15:06.706960 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 02:15:06.727271 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 16 02:15:06.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.729215 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 02:15:06.752167 systemd-networkd[735]: lo: Link UP Dec 16 02:15:06.752176 systemd-networkd[735]: lo: Gained carrier Dec 16 02:15:06.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.752635 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 02:15:06.754252 systemd[1]: Reached target network.target - Network. Dec 16 02:15:06.788188 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 02:15:06.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.792325 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 02:15:06.848629 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 02:15:06.858325 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 02:15:06.866779 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 02:15:06.874075 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 02:15:06.879122 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 02:15:06.883983 systemd-networkd[735]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 02:15:06.883991 systemd-networkd[735]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 02:15:06.884769 systemd-networkd[735]: eth0: Link UP Dec 16 02:15:06.884924 systemd-networkd[735]: eth0: Gained carrier Dec 16 02:15:06.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.884935 systemd-networkd[735]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 02:15:06.886894 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 02:15:06.887013 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:15:06.889733 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 02:15:06.895296 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 02:15:06.897649 systemd-networkd[735]: eth0: DHCPv4 address 10.0.0.94/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 02:15:06.903464 disk-uuid[797]: Primary Header is updated. Dec 16 02:15:06.903464 disk-uuid[797]: Secondary Entries is updated. Dec 16 02:15:06.903464 disk-uuid[797]: Secondary Header is updated. Dec 16 02:15:06.909575 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Dec 16 02:15:06.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.914529 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 02:15:06.924605 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 02:15:06.925782 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 02:15:06.929712 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 02:15:06.931737 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:15:06.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:06.957765 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 02:15:06.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:07.937028 disk-uuid[802]: Warning: The kernel is still using the old partition table. Dec 16 02:15:07.937028 disk-uuid[802]: The new table will be used at the next reboot or after you Dec 16 02:15:07.937028 disk-uuid[802]: run partprobe(8) or kpartx(8) Dec 16 02:15:07.937028 disk-uuid[802]: The operation has completed successfully. Dec 16 02:15:07.943464 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 02:15:07.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:07.943575 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 02:15:07.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:07.947570 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 02:15:07.987246 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (828) Dec 16 02:15:07.987287 kernel: BTRFS info (device vda6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:15:07.988230 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 02:15:07.990870 kernel: BTRFS info (device vda6): turning on async discard Dec 16 02:15:07.990897 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 02:15:07.996663 kernel: BTRFS info (device vda6): last unmount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:15:07.997613 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 02:15:07.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:07.999721 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 16 02:15:08.092001 ignition[847]: Ignition 2.24.0 Dec 16 02:15:08.092016 ignition[847]: Stage: fetch-offline Dec 16 02:15:08.092061 ignition[847]: no configs at "/usr/lib/ignition/base.d" Dec 16 02:15:08.092071 ignition[847]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 02:15:08.092219 ignition[847]: parsed url from cmdline: "" Dec 16 02:15:08.092222 ignition[847]: no config URL provided Dec 16 02:15:08.092227 ignition[847]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 02:15:08.092235 ignition[847]: no config at "/usr/lib/ignition/user.ign" Dec 16 02:15:08.092271 ignition[847]: op(1): [started] loading QEMU firmware config module Dec 16 02:15:08.092278 ignition[847]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 16 02:15:08.098566 ignition[847]: op(1): [finished] loading QEMU firmware config module Dec 16 02:15:08.143072 ignition[847]: parsing config with SHA512: e98184901ef32ee20922f99bb13eab00628f068450be78c81e3b2d6bf2a91882fcfcb5d81b916daddfecba270f2958eb78e356348ba87df68ca633289876b57e Dec 16 02:15:08.148609 unknown[847]: fetched base config from "system" Dec 16 02:15:08.148618 unknown[847]: fetched user config from "qemu" Dec 16 02:15:08.148977 ignition[847]: fetch-offline: fetch-offline passed Dec 16 02:15:08.149043 ignition[847]: Ignition finished successfully Dec 16 02:15:08.151210 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 02:15:08.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:08.153048 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 16 02:15:08.153912 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 02:15:08.180376 ignition[861]: Ignition 2.24.0 Dec 16 02:15:08.180392 ignition[861]: Stage: kargs Dec 16 02:15:08.180540 ignition[861]: no configs at "/usr/lib/ignition/base.d" Dec 16 02:15:08.180552 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 02:15:08.181344 ignition[861]: kargs: kargs passed Dec 16 02:15:08.181399 ignition[861]: Ignition finished successfully Dec 16 02:15:08.185895 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 02:15:08.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:08.188130 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 02:15:08.209607 ignition[868]: Ignition 2.24.0 Dec 16 02:15:08.209622 ignition[868]: Stage: disks Dec 16 02:15:08.209773 ignition[868]: no configs at "/usr/lib/ignition/base.d" Dec 16 02:15:08.209781 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 02:15:08.210546 ignition[868]: disks: disks passed Dec 16 02:15:08.210607 ignition[868]: Ignition finished successfully Dec 16 02:15:08.214605 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 02:15:08.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:08.215930 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Dec 16 02:15:08.217546 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 02:15:08.219642 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 02:15:08.221566 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 02:15:08.223322 systemd[1]: Reached target basic.target - Basic System. Dec 16 02:15:08.226002 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 02:15:08.243763 systemd-networkd[735]: eth0: Gained IPv6LL Dec 16 02:15:08.268196 systemd-fsck[877]: ROOT: clean, 15/456736 files, 38230/456704 blocks Dec 16 02:15:08.274643 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 02:15:08.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:08.276878 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 02:15:08.342628 kernel: EXT4-fs (vda9): mounted filesystem 0e69f709-36a9-4e15-b0c9-c7e150185653 r/w with ordered data mode. Quota mode: none. Dec 16 02:15:08.343233 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 02:15:08.344636 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 02:15:08.347243 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 02:15:08.348967 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 02:15:08.350000 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 02:15:08.350040 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 02:15:08.350066 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 02:15:08.367458 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 02:15:08.370282 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 02:15:08.374852 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886) Dec 16 02:15:08.374876 kernel: BTRFS info (device vda6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:15:08.374886 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 02:15:08.378199 kernel: BTRFS info (device vda6): turning on async discard Dec 16 02:15:08.378249 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 02:15:08.379436 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 02:15:08.494271 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 02:15:08.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:08.496692 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 02:15:08.498210 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 02:15:08.524186 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 02:15:08.526611 kernel: BTRFS info (device vda6): last unmount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:15:08.535722 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 16 02:15:08.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:08.547712 ignition[984]: INFO : Ignition 2.24.0 Dec 16 02:15:08.547712 ignition[984]: INFO : Stage: mount Dec 16 02:15:08.549212 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 02:15:08.549212 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 02:15:08.549212 ignition[984]: INFO : mount: mount passed Dec 16 02:15:08.549212 ignition[984]: INFO : Ignition finished successfully Dec 16 02:15:08.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:08.551009 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 02:15:08.553174 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 02:15:09.344701 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 02:15:09.374617 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (995) Dec 16 02:15:09.376709 kernel: BTRFS info (device vda6): first mount of filesystem eb4bb268-dde2-45a9-b660-8899d8790a47 Dec 16 02:15:09.376743 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 02:15:09.379197 kernel: BTRFS info (device vda6): turning on async discard Dec 16 02:15:09.379214 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 02:15:09.380515 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 02:15:09.408264 ignition[1012]: INFO : Ignition 2.24.0 Dec 16 02:15:09.408264 ignition[1012]: INFO : Stage: files Dec 16 02:15:09.409969 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 02:15:09.409969 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 02:15:09.409969 ignition[1012]: DEBUG : files: compiled without relabeling support, skipping Dec 16 02:15:09.412837 ignition[1012]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 02:15:09.412837 ignition[1012]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 02:15:09.415637 ignition[1012]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 02:15:09.415637 ignition[1012]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 02:15:09.415637 ignition[1012]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 02:15:09.415637 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 16 02:15:09.414307 unknown[1012]: wrote ssh authorized keys file for user: core Dec 16 02:15:09.421877 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Dec 16 02:15:09.503705 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 02:15:09.818802 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 16 02:15:09.818802 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/home/core/install.sh" Dec 16 02:15:09.822440 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 02:15:09.822440 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 02:15:09.822440 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 02:15:09.822440 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 02:15:09.822440 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 02:15:09.822440 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 02:15:09.822440 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 02:15:09.834410 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 02:15:09.834410 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 02:15:09.834410 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 02:15:09.834410 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 02:15:09.834410 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 02:15:09.834410 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Dec 16 02:15:10.190450 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 02:15:10.553242 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 16 02:15:10.553242 ignition[1012]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 02:15:10.557210 ignition[1012]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 02:15:10.557210 ignition[1012]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 02:15:10.557210 ignition[1012]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 02:15:10.557210 ignition[1012]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 16 02:15:10.557210 ignition[1012]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 02:15:10.557210 ignition[1012]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 16 
02:15:10.557210 ignition[1012]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 16 02:15:10.557210 ignition[1012]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 16 02:15:10.577509 ignition[1012]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 16 02:15:10.581734 ignition[1012]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 16 02:15:10.583139 ignition[1012]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 16 02:15:10.583139 ignition[1012]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 16 02:15:10.583139 ignition[1012]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 02:15:10.583139 ignition[1012]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 02:15:10.583139 ignition[1012]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 02:15:10.583139 ignition[1012]: INFO : files: files passed Dec 16 02:15:10.583139 ignition[1012]: INFO : Ignition finished successfully Dec 16 02:15:10.599237 kernel: kauditd_printk_skb: 26 callbacks suppressed Dec 16 02:15:10.599265 kernel: audit: type=1130 audit(1765851310.586:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.584117 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 02:15:10.591049 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 02:15:10.592936 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 02:15:10.608115 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 02:15:10.608244 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 02:15:10.611200 initrd-setup-root-after-ignition[1043]: grep: /sysroot/oem/oem-release: No such file or directory Dec 16 02:15:10.617136 kernel: audit: type=1130 audit(1765851310.610:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.617163 kernel: audit: type=1131 audit(1765851310.610:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:15:10.617243 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 02:15:10.617243 initrd-setup-root-after-ignition[1045]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 02:15:10.620530 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 02:15:10.620417 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 02:15:10.629078 kernel: audit: type=1130 audit(1765851310.623:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.624312 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 02:15:10.628914 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 02:15:10.675492 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 02:15:10.676629 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 02:15:10.683435 kernel: audit: type=1130 audit(1765851310.677:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.683464 kernel: audit: type=1131 audit(1765851310.677:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.677994 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 02:15:10.684429 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 02:15:10.686398 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 02:15:10.687317 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 02:15:10.716318 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 02:15:10.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.718825 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 02:15:10.722789 kernel: audit: type=1130 audit(1765851310.717:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:15:10.737791 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 02:15:10.738011 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 02:15:10.739950 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 02:15:10.741796 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 02:15:10.743487 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 02:15:10.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.743637 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 02:15:10.749219 kernel: audit: type=1131 audit(1765851310.744:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.748287 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 02:15:10.750173 systemd[1]: Stopped target basic.target - Basic System. Dec 16 02:15:10.751852 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 02:15:10.753695 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 02:15:10.755813 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 02:15:10.757801 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 02:15:10.759661 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 02:15:10.761548 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 02:15:10.763454 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 02:15:10.765393 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 02:15:10.767179 systemd[1]: Stopped target swap.target - Swaps. Dec 16 02:15:10.768614 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 02:15:10.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.768755 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 02:15:10.774313 kernel: audit: type=1131 audit(1765851310.770:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.773418 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 02:15:10.775498 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 02:15:10.777436 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 02:15:10.780630 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 02:15:10.781805 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 02:15:10.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.781928 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Dec 16 02:15:10.787759 kernel: audit: type=1131 audit(1765851310.783:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.786886 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 02:15:10.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.787014 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 02:15:10.788966 systemd[1]: Stopped target paths.target - Path Units. Dec 16 02:15:10.790441 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 02:15:10.795651 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 02:15:10.796871 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 02:15:10.798878 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 02:15:10.800392 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 02:15:10.800480 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 02:15:10.801940 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 02:15:10.802020 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 02:15:10.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.803473 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 02:15:10.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.803548 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 02:15:10.805235 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 02:15:10.805358 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 02:15:10.806984 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 02:15:10.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.807089 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 02:15:10.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.809684 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 02:15:10.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.812259 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 02:15:10.813298 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 02:15:10.813454 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Dec 16 02:15:10.815665 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 02:15:10.815776 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 02:15:10.817467 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 02:15:10.817579 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 02:15:10.823181 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 02:15:10.828773 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 02:15:10.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.838671 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 02:15:10.840950 ignition[1070]: INFO : Ignition 2.24.0 Dec 16 02:15:10.840950 ignition[1070]: INFO : Stage: umount Dec 16 02:15:10.843345 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 02:15:10.843345 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 02:15:10.843345 ignition[1070]: INFO : umount: umount passed Dec 16 02:15:10.843345 ignition[1070]: INFO : Ignition finished successfully Dec 16 02:15:10.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.843509 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 02:15:10.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.843664 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 02:15:10.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.847060 systemd[1]: Stopped target network.target - Network. Dec 16 02:15:10.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.849534 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 02:15:10.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.849625 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 02:15:10.852630 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 02:15:10.852692 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 02:15:10.853656 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 02:15:10.853704 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 02:15:10.856165 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 02:15:10.856214 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
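The initrd teardown above is dominated by kernel audit records of type 1130/1131 (SERVICE_START/SERVICE_STOP), each naming the unit in its msg='unit=…' field. A small sketch, relying only on that field as it appears in these lines, for tallying which units the audit stream says were started or stopped:

```python
# Tally SERVICE_START / SERVICE_STOP audit records (audit types 1130/1131)
# by the unit named in their msg='unit=...' field, as seen in the lines above.
import re
from collections import Counter

AUDIT = re.compile(r"audit\[1\]: (SERVICE_START|SERVICE_STOP) .*?unit=([\w@.-]+)")

def service_events(lines):
    counts = Counter()
    for line in lines:
        m = AUDIT.search(line)
        if m:
            counts[(m.group(2), m.group(1))] += 1
    return counts

sample = [
    "Dec 16 02:15:10.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 "
    "ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm=\"systemd\" "
    "exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? terminal=? res=success'",
]
print(service_events(sample))  # Counter({('dracut-pre-pivot', 'SERVICE_STOP'): 1})
```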
Dec 16 02:15:10.857975 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 02:15:10.859686 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 02:15:10.870255 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 02:15:10.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.870380 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 02:15:10.875672 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 02:15:10.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.875904 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 02:15:10.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.877094 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 02:15:10.879000 audit: BPF prog-id=6 op=UNLOAD Dec 16 02:15:10.877141 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 02:15:10.879980 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 02:15:10.881636 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 02:15:10.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.885532 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 02:15:10.885000 audit: BPF prog-id=9 op=UNLOAD Dec 16 02:15:10.886714 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 02:15:10.886748 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 02:15:10.889381 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 02:15:10.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.890234 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 02:15:10.890294 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 02:15:10.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.892333 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 02:15:10.892378 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 02:15:10.894061 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 02:15:10.894105 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Dec 16 02:15:10.896150 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 02:15:10.915956 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 02:15:10.919757 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 02:15:10.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.921234 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 02:15:10.921273 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 02:15:10.923168 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 02:15:10.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.923200 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 02:15:10.924910 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 02:15:10.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.924962 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 02:15:10.927501 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 02:15:10.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.927555 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 02:15:10.930292 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 02:15:10.930355 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 02:15:10.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.934242 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 02:15:10.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.935651 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 02:15:10.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.935717 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 02:15:10.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.937816 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Dec 16 02:15:10.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.937893 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 02:15:10.940144 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 02:15:10.940192 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 02:15:10.942307 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 02:15:10.942360 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 02:15:10.944401 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 02:15:10.944449 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:15:10.947235 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 02:15:10.965691 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 02:15:10.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.970689 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 02:15:10.971646 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 02:15:10.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:10.972971 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 02:15:10.975481 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 02:15:10.992402 systemd[1]: Switching root. Dec 16 02:15:11.028864 systemd-journald[345]: Journal stopped Dec 16 02:15:11.840251 systemd-journald[345]: Received SIGTERM from PID 1 (systemd). Dec 16 02:15:11.840300 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 02:15:11.840315 kernel: SELinux: policy capability open_perms=1 Dec 16 02:15:11.840342 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 02:15:11.840358 kernel: SELinux: policy capability always_check_network=0 Dec 16 02:15:11.840368 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 02:15:11.840381 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 02:15:11.840391 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 02:15:11.840402 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 02:15:11.840412 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 02:15:11.840422 systemd[1]: Successfully loaded SELinux policy in 61.132ms. Dec 16 02:15:11.840439 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.839ms. 
Dec 16 02:15:11.840451 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 02:15:11.840464 systemd[1]: Detected virtualization kvm. Dec 16 02:15:11.840476 systemd[1]: Detected architecture arm64. Dec 16 02:15:11.840487 systemd[1]: Detected first boot. Dec 16 02:15:11.840497 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 02:15:11.840511 zram_generator::config[1115]: No configuration found. Dec 16 02:15:11.840523 kernel: NET: Registered PF_VSOCK protocol family Dec 16 02:15:11.840534 systemd[1]: Populated /etc with preset unit settings. Dec 16 02:15:11.840546 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 02:15:11.840559 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 02:15:11.840570 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 02:15:11.840581 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 02:15:11.840605 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 02:15:11.840617 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 02:15:11.840631 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 02:15:11.840644 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 02:15:11.840655 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 02:15:11.840666 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 02:15:11.840676 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 02:15:11.840687 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 02:15:11.840698 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 02:15:11.840709 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 02:15:11.840721 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 02:15:11.840732 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 02:15:11.840742 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 02:15:11.840753 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 16 02:15:11.840763 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 02:15:11.840774 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 02:15:11.840784 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 02:15:11.840796 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 02:15:11.840807 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 02:15:11.840818 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 02:15:11.840830 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
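After the switch to the real root, PID 1 reports its version and compile-time feature set in the "(+PAM +AUDIT … +LIBARCHIVE)" line above. A throwaway sketch for splitting that list into enabled and disabled features; the string below is copied from that line. Note that -BPF_FRAMEWORK in this build is consistent with the later systemd-nsresourced message about BPF functionality being disabled at compile time.

```python
# Split the compile-time feature list printed by systemd (copied from the
# "systemd 257.9 running in system mode (...)" line above) into enabled ("+")
# and disabled ("-") sets.
flags_line = (
    "+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS "
    "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
    "+LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
    "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON "
    "+UTMP -SYSVINIT +LIBARCHIVE"
)

enabled  = sorted(f[1:] for f in flags_line.split() if f.startswith("+"))
disabled = sorted(f[1:] for f in flags_line.split() if f.startswith("-"))
print("disabled:", ", ".join(disabled))
```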
Dec 16 02:15:11.840841 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 02:15:11.840852 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 02:15:11.840867 systemd[1]: Reached target slices.target - Slice Units. Dec 16 02:15:11.840878 systemd[1]: Reached target swap.target - Swaps. Dec 16 02:15:11.840889 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 02:15:11.840900 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 02:15:11.840911 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 02:15:11.840922 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 02:15:11.840933 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 02:15:11.840945 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 02:15:11.840956 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 02:15:11.840967 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 02:15:11.840978 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 02:15:11.840989 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 02:15:11.840999 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 02:15:11.841010 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 02:15:11.841022 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 02:15:11.841033 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 02:15:11.841044 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 02:15:11.841054 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 02:15:11.841065 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 02:15:11.841076 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 02:15:11.841087 systemd[1]: Reached target machines.target - Containers. Dec 16 02:15:11.841100 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 02:15:11.841111 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 02:15:11.841122 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 02:15:11.841132 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 02:15:11.841146 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 02:15:11.841157 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 02:15:11.841169 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 02:15:11.841181 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 02:15:11.841191 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 02:15:11.841202 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Dec 16 02:15:11.841214 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 02:15:11.841224 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 02:15:11.841235 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 02:15:11.841247 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 02:15:11.841257 kernel: fuse: init (API version 7.41) Dec 16 02:15:11.841268 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 02:15:11.841279 kernel: ACPI: bus type drm_connector registered Dec 16 02:15:11.841289 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 02:15:11.841301 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 02:15:11.841313 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 02:15:11.841331 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 02:15:11.841344 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 02:15:11.841355 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 02:15:11.841365 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 02:15:11.841462 systemd-journald[1187]: Collecting audit messages is enabled. Dec 16 02:15:11.841492 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 02:15:11.841504 systemd-journald[1187]: Journal started Dec 16 02:15:11.841527 systemd-journald[1187]: Runtime Journal (/run/log/journal/7795bdb1756940e5a0c16511442d0038) is 6M, max 48.5M, 42.4M free. Dec 16 02:15:11.704000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 02:15:11.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.801000 audit: BPF prog-id=14 op=UNLOAD Dec 16 02:15:11.801000 audit: BPF prog-id=13 op=UNLOAD Dec 16 02:15:11.804000 audit: BPF prog-id=15 op=LOAD Dec 16 02:15:11.809000 audit: BPF prog-id=16 op=LOAD Dec 16 02:15:11.809000 audit: BPF prog-id=17 op=LOAD Dec 16 02:15:11.839000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 02:15:11.839000 audit[1187]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=fffff5ff2150 a2=4000 a3=0 items=0 ppid=1 pid=1187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:11.839000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 02:15:11.609154 systemd[1]: Queued start job for default target multi-user.target. 
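The SYSCALL audit record emitted while systemd-journald starts carries arch=c00000b7. A short sketch decoding that field under the kernel's AUDIT_ARCH_* bit convention (64-bit flag, little-endian flag, low 16 bits = ELF machine number); the constants in the comments are assumptions about that convention, not values printed in this log. On the generic 64-bit syscall table used by arm64, syscall 211 corresponds to sendmsg, which fits a journald process writing to a socket.

```python
# Decode the arch=c00000b7 field of the SYSCALL audit record above. The bit
# meanings follow the kernel's AUDIT_ARCH_* convention (assumed here); 183
# (0xb7) is EM_AARCH64 in the ELF machine table.
arch = 0xC00000B7
print("64-bit ABI :", bool(arch & 0x80000000))   # __AUDIT_ARCH_64BIT
print("little-end :", bool(arch & 0x40000000))   # __AUDIT_ARCH_LE
print("ELF machine:", arch & 0xFFFF)             # 183 == EM_AARCH64 (aarch64)
```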
Dec 16 02:15:11.634004 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 02:15:11.634732 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 02:15:11.843413 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 02:15:11.845365 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 02:15:11.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.846943 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 02:15:11.848106 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 02:15:11.849328 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 02:15:11.851690 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 02:15:11.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.853087 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 02:15:11.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.854524 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 02:15:11.854725 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 02:15:11.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.856020 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 02:15:11.856222 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 02:15:11.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.857535 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 02:15:11.857833 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 02:15:11.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:15:11.859053 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 02:15:11.859201 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 02:15:11.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.860756 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 02:15:11.860934 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 02:15:11.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.862221 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 02:15:11.862390 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 02:15:11.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.863814 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 02:15:11.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.865343 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 02:15:11.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.867417 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 02:15:11.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.869766 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 02:15:11.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.881917 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Dec 16 02:15:11.883360 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 02:15:11.885544 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 02:15:11.887575 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 02:15:11.888645 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 02:15:11.888685 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 02:15:11.890506 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 02:15:11.892208 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 02:15:11.892332 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 02:15:11.899458 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 02:15:11.901532 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 02:15:11.902807 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 02:15:11.903985 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 02:15:11.905132 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 02:15:11.906426 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 02:15:11.912674 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 02:15:11.916330 systemd-journald[1187]: Time spent on flushing to /var/log/journal/7795bdb1756940e5a0c16511442d0038 is 14.541ms for 1001 entries. Dec 16 02:15:11.916330 systemd-journald[1187]: System Journal (/var/log/journal/7795bdb1756940e5a0c16511442d0038) is 8M, max 163.5M, 155.5M free. Dec 16 02:15:11.934738 systemd-journald[1187]: Received client request to flush runtime journal. Dec 16 02:15:11.934772 kernel: loop1: detected capacity change from 0 to 45344 Dec 16 02:15:11.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.914939 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 02:15:11.921671 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 02:15:11.924171 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 02:15:11.926818 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 02:15:11.928290 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 02:15:11.932723 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 02:15:11.936111 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. 
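journald reports 14.541 ms spent flushing 1001 entries to the persistent journal, which works out to roughly 14.5 µs per entry; the one-liner below just reproduces that arithmetic from the figures in the log line.

```python
# Per-entry cost of the journal flush reported above
# ("Time spent on flushing ... is 14.541ms for 1001 entries").
flush_ms, entries = 14.541, 1001
print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~14.5 us
```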
Dec 16 02:15:11.936130 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Dec 16 02:15:11.936745 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 02:15:11.939084 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 02:15:11.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.943663 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 02:15:11.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.948066 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 02:15:11.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.951086 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 02:15:11.960034 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 02:15:11.962113 kernel: loop2: detected capacity change from 0 to 207008 Dec 16 02:15:11.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.977075 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 02:15:11.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:11.978000 audit: BPF prog-id=18 op=LOAD Dec 16 02:15:11.978000 audit: BPF prog-id=19 op=LOAD Dec 16 02:15:11.978000 audit: BPF prog-id=20 op=LOAD Dec 16 02:15:11.979917 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 02:15:11.981000 audit: BPF prog-id=21 op=LOAD Dec 16 02:15:11.982292 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 02:15:11.986744 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 02:15:11.988596 kernel: loop3: detected capacity change from 0 to 100192 Dec 16 02:15:11.988000 audit: BPF prog-id=22 op=LOAD Dec 16 02:15:11.989000 audit: BPF prog-id=23 op=LOAD Dec 16 02:15:11.989000 audit: BPF prog-id=24 op=LOAD Dec 16 02:15:11.990629 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 02:15:11.992000 audit: BPF prog-id=25 op=LOAD Dec 16 02:15:11.992000 audit: BPF prog-id=26 op=LOAD Dec 16 02:15:11.992000 audit: BPF prog-id=27 op=LOAD Dec 16 02:15:11.994082 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 02:15:12.014059 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Dec 16 02:15:12.014079 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. 
Dec 16 02:15:12.018802 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 02:15:12.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:12.020610 kernel: loop4: detected capacity change from 0 to 45344 Dec 16 02:15:12.027620 kernel: loop5: detected capacity change from 0 to 207008 Dec 16 02:15:12.029866 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 02:15:12.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:12.036608 kernel: loop6: detected capacity change from 0 to 100192 Dec 16 02:15:12.037272 systemd-nsresourced[1255]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 02:15:12.038241 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 02:15:12.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:12.042772 (sd-merge)[1259]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Dec 16 02:15:12.047059 (sd-merge)[1259]: Merged extensions into '/usr'. Dec 16 02:15:12.050817 systemd[1]: Reload requested from client PID 1232 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 02:15:12.050833 systemd[1]: Reloading... Dec 16 02:15:12.104512 systemd-oomd[1252]: No swap; memory pressure usage will be degraded Dec 16 02:15:12.110348 systemd-resolved[1253]: Positive Trust Anchors: Dec 16 02:15:12.110366 systemd-resolved[1253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 02:15:12.110370 systemd-resolved[1253]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 02:15:12.110402 systemd-resolved[1253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 02:15:12.111605 zram_generator::config[1302]: No configuration found. Dec 16 02:15:12.120838 systemd-resolved[1253]: Defaulting to hostname 'linux'. Dec 16 02:15:12.264461 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 02:15:12.264618 systemd[1]: Reloading finished in 213 ms. Dec 16 02:15:12.295514 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 02:15:12.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:12.296917 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
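Each sysext image shows up twice as a loop device with the same sector count (loop1/loop4 at 45344, loop2/loop5 at 207008, loop3/loop6 at 100192), consistent with the three extensions that sd-merge lists ('containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'). Assuming the kernel's usual 512-byte sectors for "detected capacity change" messages, the raw image sizes come out as below; the log does not say which size belongs to which image.

```python
# Convert the loop-device capacities above (reported in sectors; 512-byte
# sectors are assumed, as is usual for "detected capacity change") into MiB.
# Which of the three sysext images each size belongs to is not stated in the log.
for sectors in (45344, 207008, 100192):
    print(f"{sectors} sectors = {sectors * 512 / 2**20:.1f} MiB")
```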
Dec 16 02:15:12.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:12.298281 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 02:15:12.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:12.301871 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 02:15:12.317015 systemd[1]: Starting ensure-sysext.service... Dec 16 02:15:12.318902 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 02:15:12.319000 audit: BPF prog-id=28 op=LOAD Dec 16 02:15:12.319000 audit: BPF prog-id=25 op=UNLOAD Dec 16 02:15:12.320000 audit: BPF prog-id=29 op=LOAD Dec 16 02:15:12.320000 audit: BPF prog-id=30 op=LOAD Dec 16 02:15:12.320000 audit: BPF prog-id=26 op=UNLOAD Dec 16 02:15:12.320000 audit: BPF prog-id=27 op=UNLOAD Dec 16 02:15:12.320000 audit: BPF prog-id=31 op=LOAD Dec 16 02:15:12.320000 audit: BPF prog-id=15 op=UNLOAD Dec 16 02:15:12.320000 audit: BPF prog-id=32 op=LOAD Dec 16 02:15:12.320000 audit: BPF prog-id=33 op=LOAD Dec 16 02:15:12.320000 audit: BPF prog-id=16 op=UNLOAD Dec 16 02:15:12.320000 audit: BPF prog-id=17 op=UNLOAD Dec 16 02:15:12.321000 audit: BPF prog-id=34 op=LOAD Dec 16 02:15:12.321000 audit: BPF prog-id=21 op=UNLOAD Dec 16 02:15:12.322000 audit: BPF prog-id=35 op=LOAD Dec 16 02:15:12.322000 audit: BPF prog-id=18 op=UNLOAD Dec 16 02:15:12.322000 audit: BPF prog-id=36 op=LOAD Dec 16 02:15:12.322000 audit: BPF prog-id=37 op=LOAD Dec 16 02:15:12.322000 audit: BPF prog-id=19 op=UNLOAD Dec 16 02:15:12.322000 audit: BPF prog-id=20 op=UNLOAD Dec 16 02:15:12.323000 audit: BPF prog-id=38 op=LOAD Dec 16 02:15:12.323000 audit: BPF prog-id=22 op=UNLOAD Dec 16 02:15:12.323000 audit: BPF prog-id=39 op=LOAD Dec 16 02:15:12.323000 audit: BPF prog-id=40 op=LOAD Dec 16 02:15:12.323000 audit: BPF prog-id=23 op=UNLOAD Dec 16 02:15:12.323000 audit: BPF prog-id=24 op=UNLOAD Dec 16 02:15:12.326324 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 02:15:12.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:12.329000 audit: BPF prog-id=8 op=UNLOAD Dec 16 02:15:12.329000 audit: BPF prog-id=7 op=UNLOAD Dec 16 02:15:12.330000 audit: BPF prog-id=41 op=LOAD Dec 16 02:15:12.330000 audit: BPF prog-id=42 op=LOAD Dec 16 02:15:12.331521 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 02:15:12.333042 systemd[1]: Reload requested from client PID 1335 ('systemctl') (unit ensure-sysext.service)... Dec 16 02:15:12.333057 systemd[1]: Reloading... Dec 16 02:15:12.334104 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 02:15:12.334140 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 02:15:12.334561 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Dec 16 02:15:12.335536 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Dec 16 02:15:12.335603 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Dec 16 02:15:12.340961 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 02:15:12.340971 systemd-tmpfiles[1336]: Skipping /boot Dec 16 02:15:12.347959 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 02:15:12.347973 systemd-tmpfiles[1336]: Skipping /boot Dec 16 02:15:12.357444 systemd-udevd[1339]: Using default interface naming scheme 'v257'. Dec 16 02:15:12.400646 zram_generator::config[1385]: No configuration found. Dec 16 02:15:12.601991 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 02:15:12.603725 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 16 02:15:12.603914 systemd[1]: Reloading finished in 270 ms. Dec 16 02:15:12.623422 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 02:15:12.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:12.625000 audit: BPF prog-id=43 op=LOAD Dec 16 02:15:12.625000 audit: BPF prog-id=35 op=UNLOAD Dec 16 02:15:12.627000 audit: BPF prog-id=44 op=LOAD Dec 16 02:15:12.627000 audit: BPF prog-id=45 op=LOAD Dec 16 02:15:12.627000 audit: BPF prog-id=36 op=UNLOAD Dec 16 02:15:12.627000 audit: BPF prog-id=37 op=UNLOAD Dec 16 02:15:12.627000 audit: BPF prog-id=46 op=LOAD Dec 16 02:15:12.627000 audit: BPF prog-id=31 op=UNLOAD Dec 16 02:15:12.627000 audit: BPF prog-id=47 op=LOAD Dec 16 02:15:12.627000 audit: BPF prog-id=48 op=LOAD Dec 16 02:15:12.627000 audit: BPF prog-id=32 op=UNLOAD Dec 16 02:15:12.627000 audit: BPF prog-id=33 op=UNLOAD Dec 16 02:15:12.628000 audit: BPF prog-id=49 op=LOAD Dec 16 02:15:12.628000 audit: BPF prog-id=38 op=UNLOAD Dec 16 02:15:12.628000 audit: BPF prog-id=50 op=LOAD Dec 16 02:15:12.628000 audit: BPF prog-id=51 op=LOAD Dec 16 02:15:12.628000 audit: BPF prog-id=39 op=UNLOAD Dec 16 02:15:12.628000 audit: BPF prog-id=40 op=UNLOAD Dec 16 02:15:12.629000 audit: BPF prog-id=52 op=LOAD Dec 16 02:15:12.642000 audit: BPF prog-id=28 op=UNLOAD Dec 16 02:15:12.642000 audit: BPF prog-id=53 op=LOAD Dec 16 02:15:12.642000 audit: BPF prog-id=54 op=LOAD Dec 16 02:15:12.642000 audit: BPF prog-id=29 op=UNLOAD Dec 16 02:15:12.642000 audit: BPF prog-id=30 op=UNLOAD Dec 16 02:15:12.643000 audit: BPF prog-id=55 op=LOAD Dec 16 02:15:12.643000 audit: BPF prog-id=56 op=LOAD Dec 16 02:15:12.643000 audit: BPF prog-id=41 op=UNLOAD Dec 16 02:15:12.643000 audit: BPF prog-id=42 op=UNLOAD Dec 16 02:15:12.644000 audit: BPF prog-id=57 op=LOAD Dec 16 02:15:12.644000 audit: BPF prog-id=34 op=UNLOAD Dec 16 02:15:12.647665 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 02:15:12.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:12.666646 systemd[1]: Finished ensure-sysext.service. 
Dec 16 02:15:12.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:12.682242 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 02:15:12.684357 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 02:15:12.685629 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 02:15:12.686517 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 02:15:12.703504 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 02:15:12.706755 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 02:15:12.711226 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 02:15:12.712726 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 02:15:12.712834 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 02:15:12.713806 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 02:15:12.717114 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 02:15:12.718793 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 02:15:12.725552 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 02:15:12.727000 audit: BPF prog-id=58 op=LOAD Dec 16 02:15:12.730726 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 02:15:12.732000 audit: BPF prog-id=59 op=LOAD Dec 16 02:15:12.733813 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 02:15:12.736354 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 02:15:12.740849 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 02:15:12.743055 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 02:15:12.743296 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 02:15:12.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:12.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:12.744900 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 02:15:12.745062 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 02:15:12.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:15:12.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:12.746837 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 02:15:12.747311 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 02:15:12.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:12.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:12.748938 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 02:15:12.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:12.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:12.752826 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 02:15:12.754524 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 02:15:12.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:12.756093 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 02:15:12.755000 audit[1472]: SYSTEM_BOOT pid=1472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 02:15:12.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:12.762743 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 02:15:12.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:15:12.765000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 02:15:12.765000 audit[1483]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffca42ea50 a2=420 a3=0 items=0 ppid=1444 pid=1483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:12.765000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 02:15:12.766648 augenrules[1483]: No rules Dec 16 02:15:12.770174 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 02:15:12.770436 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 02:15:12.776810 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 02:15:12.778943 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 02:15:12.779029 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 02:15:12.779079 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 02:15:12.786792 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 02:15:12.819140 systemd-networkd[1468]: lo: Link UP Dec 16 02:15:12.819148 systemd-networkd[1468]: lo: Gained carrier Dec 16 02:15:12.820404 systemd-networkd[1468]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 02:15:12.820414 systemd-networkd[1468]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 02:15:12.820462 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 02:15:12.822065 systemd-networkd[1468]: eth0: Link UP Dec 16 02:15:12.822235 systemd[1]: Reached target network.target - Network. Dec 16 02:15:12.822576 systemd-networkd[1468]: eth0: Gained carrier Dec 16 02:15:12.822611 systemd-networkd[1468]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 02:15:12.824688 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 02:15:12.828730 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 02:15:12.830095 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 02:15:12.831492 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 02:15:12.847694 systemd-networkd[1468]: eth0: DHCPv4 address 10.0.0.94/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 02:15:12.848250 systemd-timesyncd[1470]: Network configuration changed, trying to establish connection. Dec 16 02:15:12.850042 systemd-timesyncd[1470]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 16 02:15:12.850167 systemd-timesyncd[1470]: Initial clock synchronization to Tue 2025-12-16 02:15:13.134016 UTC. Dec 16 02:15:12.857275 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Dec 16 02:15:12.985254 ldconfig[1456]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 02:15:12.989140 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 02:15:12.991713 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 02:15:13.020691 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 02:15:13.022095 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 02:15:13.023289 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 02:15:13.024539 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 02:15:13.025978 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 02:15:13.027089 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 02:15:13.028343 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 02:15:13.029680 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 02:15:13.030724 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 02:15:13.031883 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 02:15:13.031921 systemd[1]: Reached target paths.target - Path Units. Dec 16 02:15:13.032767 systemd[1]: Reached target timers.target - Timer Units. Dec 16 02:15:13.034352 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 02:15:13.036865 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 02:15:13.040254 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 02:15:13.041677 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 02:15:13.043019 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 02:15:13.056652 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 02:15:13.057923 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 02:15:13.059710 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 02:15:13.060805 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 02:15:13.061705 systemd[1]: Reached target basic.target - Basic System. Dec 16 02:15:13.062598 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 02:15:13.062646 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 02:15:13.063647 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 02:15:13.065663 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 02:15:13.067505 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 02:15:13.069553 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 02:15:13.071602 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Dec 16 02:15:13.072678 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 02:15:13.073665 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 02:15:13.076218 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 02:15:13.077534 jq[1514]: false Dec 16 02:15:13.079774 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 02:15:13.082369 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 02:15:13.085081 extend-filesystems[1515]: Found /dev/vda6 Dec 16 02:15:13.087730 extend-filesystems[1515]: Found /dev/vda9 Dec 16 02:15:13.087813 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 02:15:13.088978 extend-filesystems[1515]: Checking size of /dev/vda9 Dec 16 02:15:13.089370 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 02:15:13.089814 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 02:15:13.090783 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 02:15:13.093212 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 02:15:13.099023 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 02:15:13.100540 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 02:15:13.101023 jq[1534]: true Dec 16 02:15:13.101422 extend-filesystems[1515]: Resized partition /dev/vda9 Dec 16 02:15:13.102689 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 02:15:13.103016 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 02:15:13.103212 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 02:15:13.104443 extend-filesystems[1542]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 02:15:13.110663 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Dec 16 02:15:13.108893 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 02:15:13.109117 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 02:15:13.138253 update_engine[1530]: I20251216 02:15:13.138002 1530 main.cc:92] Flatcar Update Engine starting Dec 16 02:15:13.141749 jq[1549]: true Dec 16 02:15:13.146646 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Dec 16 02:15:13.155729 tar[1546]: linux-arm64/LICENSE Dec 16 02:15:13.163612 tar[1546]: linux-arm64/helm Dec 16 02:15:13.167486 extend-filesystems[1542]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 02:15:13.167486 extend-filesystems[1542]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 16 02:15:13.167486 extend-filesystems[1542]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Dec 16 02:15:13.167356 dbus-daemon[1512]: [system] SELinux support is enabled Dec 16 02:15:13.167580 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Dec 16 02:15:13.178933 update_engine[1530]: I20251216 02:15:13.173365 1530 update_check_scheduler.cc:74] Next update check in 7m26s Dec 16 02:15:13.178959 extend-filesystems[1515]: Resized filesystem in /dev/vda9 Dec 16 02:15:13.177914 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 02:15:13.180673 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 02:15:13.186341 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 02:15:13.186374 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 02:15:13.188843 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 02:15:13.188868 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 02:15:13.191104 systemd[1]: Started update-engine.service - Update Engine. Dec 16 02:15:13.199810 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 02:15:13.205133 systemd-logind[1526]: Watching system buttons on /dev/input/event0 (Power Button) Dec 16 02:15:13.206209 systemd-logind[1526]: New seat seat0. Dec 16 02:15:13.207477 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 02:15:13.209462 bash[1585]: Updated "/home/core/.ssh/authorized_keys" Dec 16 02:15:13.214685 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 02:15:13.217396 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Dec 16 02:15:13.266769 locksmithd[1584]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 02:15:13.271241 containerd[1550]: time="2025-12-16T02:15:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 02:15:13.271762 containerd[1550]: time="2025-12-16T02:15:13.271725479Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 02:15:13.282035 containerd[1550]: time="2025-12-16T02:15:13.281893106Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.485µs" Dec 16 02:15:13.282035 containerd[1550]: time="2025-12-16T02:15:13.281924006Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 02:15:13.282035 containerd[1550]: time="2025-12-16T02:15:13.281971308Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 02:15:13.282035 containerd[1550]: time="2025-12-16T02:15:13.281984231Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 02:15:13.282156 containerd[1550]: time="2025-12-16T02:15:13.282116486Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 02:15:13.282156 containerd[1550]: time="2025-12-16T02:15:13.282133178Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 02:15:13.282230 containerd[1550]: time="2025-12-16T02:15:13.282183048Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 02:15:13.282230 containerd[1550]: time="2025-12-16T02:15:13.282195184Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 02:15:13.282476 containerd[1550]: time="2025-12-16T02:15:13.282422747Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 02:15:13.282476 containerd[1550]: time="2025-12-16T02:15:13.282447847Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 02:15:13.282476 containerd[1550]: time="2025-12-16T02:15:13.282459031Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 02:15:13.282476 containerd[1550]: time="2025-12-16T02:15:13.282466983Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 02:15:13.282708 containerd[1550]: time="2025-12-16T02:15:13.282602593Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 02:15:13.282708 containerd[1550]: time="2025-12-16T02:15:13.282640451Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 02:15:13.282757 containerd[1550]: 
time="2025-12-16T02:15:13.282731534Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 02:15:13.282915 containerd[1550]: time="2025-12-16T02:15:13.282890298Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 02:15:13.282945 containerd[1550]: time="2025-12-16T02:15:13.282923393Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 02:15:13.282945 containerd[1550]: time="2025-12-16T02:15:13.282934535Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 02:15:13.282979 containerd[1550]: time="2025-12-16T02:15:13.282963860Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 02:15:13.283317 containerd[1550]: time="2025-12-16T02:15:13.283214991Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 02:15:13.283317 containerd[1550]: time="2025-12-16T02:15:13.283289755Z" level=info msg="metadata content store policy set" policy=shared Dec 16 02:15:13.286813 containerd[1550]: time="2025-12-16T02:15:13.286781396Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 02:15:13.286813 containerd[1550]: time="2025-12-16T02:15:13.286827041Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 02:15:13.286932 containerd[1550]: time="2025-12-16T02:15:13.286902964Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 02:15:13.286932 containerd[1550]: time="2025-12-16T02:15:13.286916840Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 02:15:13.286932 containerd[1550]: time="2025-12-16T02:15:13.286928272Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 02:15:13.286988 containerd[1550]: time="2025-12-16T02:15:13.286939290Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 02:15:13.286988 containerd[1550]: time="2025-12-16T02:15:13.286949852Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 02:15:13.286988 containerd[1550]: time="2025-12-16T02:15:13.286959213Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 02:15:13.286988 containerd[1550]: time="2025-12-16T02:15:13.286969982Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 02:15:13.286988 containerd[1550]: time="2025-12-16T02:15:13.286981704Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 02:15:13.287075 containerd[1550]: time="2025-12-16T02:15:13.286991645Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 02:15:13.287075 containerd[1550]: time="2025-12-16T02:15:13.287001503Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service 
type=io.containerd.service.v1 Dec 16 02:15:13.287075 containerd[1550]: time="2025-12-16T02:15:13.287012065Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 02:15:13.287123 containerd[1550]: time="2025-12-16T02:15:13.287114415Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 02:15:13.287527 containerd[1550]: time="2025-12-16T02:15:13.287224882Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 02:15:13.287527 containerd[1550]: time="2025-12-16T02:15:13.287254373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 02:15:13.287527 containerd[1550]: time="2025-12-16T02:15:13.287273054Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 02:15:13.287527 containerd[1550]: time="2025-12-16T02:15:13.287282912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 02:15:13.287527 containerd[1550]: time="2025-12-16T02:15:13.287292729Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 02:15:13.287527 containerd[1550]: time="2025-12-16T02:15:13.287301675Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 02:15:13.287527 containerd[1550]: time="2025-12-16T02:15:13.287312859Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 02:15:13.287527 containerd[1550]: time="2025-12-16T02:15:13.287323918Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 02:15:13.287527 containerd[1550]: time="2025-12-16T02:15:13.287334439Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 02:15:13.287527 containerd[1550]: time="2025-12-16T02:15:13.287345374Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 02:15:13.287527 containerd[1550]: time="2025-12-16T02:15:13.287354486Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 02:15:13.287527 containerd[1550]: time="2025-12-16T02:15:13.287377474Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 02:15:13.287527 containerd[1550]: time="2025-12-16T02:15:13.287411770Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 02:15:13.287527 containerd[1550]: time="2025-12-16T02:15:13.287425066Z" level=info msg="Start snapshots syncer" Dec 16 02:15:13.287527 containerd[1550]: time="2025-12-16T02:15:13.287455179Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 02:15:13.288013 containerd[1550]: time="2025-12-16T02:15:13.287716292Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 02:15:13.288013 containerd[1550]: time="2025-12-16T02:15:13.287764339Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 02:15:13.288121 containerd[1550]: time="2025-12-16T02:15:13.287814789Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 02:15:13.288121 containerd[1550]: time="2025-12-16T02:15:13.287910470Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 02:15:13.288121 containerd[1550]: time="2025-12-16T02:15:13.287931594Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 02:15:13.288121 containerd[1550]: time="2025-12-16T02:15:13.287942819Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 02:15:13.288121 containerd[1550]: time="2025-12-16T02:15:13.287952056Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 02:15:13.288121 containerd[1550]: time="2025-12-16T02:15:13.287964523Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 02:15:13.288121 containerd[1550]: time="2025-12-16T02:15:13.287982044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 02:15:13.288121 containerd[1550]: time="2025-12-16T02:15:13.287993186Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 02:15:13.288121 containerd[1550]: time="2025-12-16T02:15:13.288003044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 
02:15:13.288121 containerd[1550]: time="2025-12-16T02:15:13.288012943Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 02:15:13.288121 containerd[1550]: time="2025-12-16T02:15:13.288045210Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 02:15:13.288121 containerd[1550]: time="2025-12-16T02:15:13.288061861Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 02:15:13.288121 containerd[1550]: time="2025-12-16T02:15:13.288070186Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 02:15:13.288347 containerd[1550]: time="2025-12-16T02:15:13.288079754Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 02:15:13.288347 containerd[1550]: time="2025-12-16T02:15:13.288087997Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 02:15:13.288347 containerd[1550]: time="2025-12-16T02:15:13.288102949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 02:15:13.288347 containerd[1550]: time="2025-12-16T02:15:13.288114133Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 02:15:13.288347 containerd[1550]: time="2025-12-16T02:15:13.288221453Z" level=info msg="runtime interface created" Dec 16 02:15:13.288347 containerd[1550]: time="2025-12-16T02:15:13.288229033Z" level=info msg="created NRI interface" Dec 16 02:15:13.288347 containerd[1550]: time="2025-12-16T02:15:13.288237814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 02:15:13.288347 containerd[1550]: time="2025-12-16T02:15:13.288248873Z" level=info msg="Connect containerd service" Dec 16 02:15:13.288347 containerd[1550]: time="2025-12-16T02:15:13.288270204Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 02:15:13.288945 containerd[1550]: time="2025-12-16T02:15:13.288918141Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 02:15:13.356138 containerd[1550]: time="2025-12-16T02:15:13.355866838Z" level=info msg="Start subscribing containerd event" Dec 16 02:15:13.356328 containerd[1550]: time="2025-12-16T02:15:13.356309661Z" level=info msg="Start recovering state" Dec 16 02:15:13.356451 containerd[1550]: time="2025-12-16T02:15:13.356434253Z" level=info msg="Start event monitor" Dec 16 02:15:13.356483 containerd[1550]: time="2025-12-16T02:15:13.356457780Z" level=info msg="Start cni network conf syncer for default" Dec 16 02:15:13.356483 containerd[1550]: time="2025-12-16T02:15:13.356475052Z" level=info msg="Start streaming server" Dec 16 02:15:13.356526 containerd[1550]: time="2025-12-16T02:15:13.356486360Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 02:15:13.356526 containerd[1550]: time="2025-12-16T02:15:13.356495970Z" level=info msg="runtime interface starting up..." 
Dec 16 02:15:13.356526 containerd[1550]: time="2025-12-16T02:15:13.356505993Z" level=info msg="starting plugins..." Dec 16 02:15:13.356577 containerd[1550]: time="2025-12-16T02:15:13.356530928Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 02:15:13.356760 containerd[1550]: time="2025-12-16T02:15:13.356672461Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 02:15:13.356817 containerd[1550]: time="2025-12-16T02:15:13.356802396Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 02:15:13.356987 containerd[1550]: time="2025-12-16T02:15:13.356973628Z" level=info msg="containerd successfully booted in 0.086075s" Dec 16 02:15:13.357156 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 02:15:13.466467 tar[1546]: linux-arm64/README.md Dec 16 02:15:13.482710 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 02:15:13.907747 sshd_keygen[1544]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 02:15:13.928697 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 02:15:13.931482 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 02:15:13.965799 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 02:15:13.967684 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 02:15:13.970180 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 02:15:13.992179 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 02:15:13.995987 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 02:15:13.999117 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 02:15:14.000440 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 02:15:14.772018 systemd-networkd[1468]: eth0: Gained IPv6LL Dec 16 02:15:14.774292 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 02:15:14.776149 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 02:15:14.778742 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 16 02:15:14.781139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:15:14.797901 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 02:15:14.813831 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 16 02:15:14.814713 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 16 02:15:14.818020 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 02:15:14.819930 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 02:15:15.362004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:15:15.363623 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 02:15:15.366979 (kubelet)[1649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 02:15:15.368837 systemd[1]: Startup finished in 1.465s (kernel) + 5.190s (initrd) + 4.212s (userspace) = 10.869s. 
Dec 16 02:15:15.730426 kubelet[1649]: E1216 02:15:15.730340 1649 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 02:15:15.732724 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 02:15:15.732856 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 02:15:15.733200 systemd[1]: kubelet.service: Consumed 747ms CPU time, 256.6M memory peak. Dec 16 02:15:17.349896 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 02:15:17.351158 systemd[1]: Started sshd@0-10.0.0.94:22-10.0.0.1:58436.service - OpenSSH per-connection server daemon (10.0.0.1:58436). Dec 16 02:15:17.427528 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 58436 ssh2: RSA SHA256:rH1sOPbnKIrlCFQFTuacxecfE1BEKAl/Bfev/eSdaO4 Dec 16 02:15:17.429797 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:15:17.436188 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 02:15:17.437127 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 02:15:17.440811 systemd-logind[1526]: New session 1 of user core. Dec 16 02:15:17.455047 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 02:15:17.459271 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 02:15:17.477093 (systemd)[1668]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:15:17.479473 systemd-logind[1526]: New session 2 of user core. Dec 16 02:15:17.590486 systemd[1668]: Queued start job for default target default.target. Dec 16 02:15:17.614665 systemd[1668]: Created slice app.slice - User Application Slice. Dec 16 02:15:17.614697 systemd[1668]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 02:15:17.614708 systemd[1668]: Reached target paths.target - Paths. Dec 16 02:15:17.614763 systemd[1668]: Reached target timers.target - Timers. Dec 16 02:15:17.616055 systemd[1668]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 02:15:17.616917 systemd[1668]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 02:15:17.625855 systemd[1668]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 02:15:17.625908 systemd[1668]: Reached target sockets.target - Sockets. Dec 16 02:15:17.626411 systemd[1668]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 02:15:17.626478 systemd[1668]: Reached target basic.target - Basic System. Dec 16 02:15:17.626521 systemd[1668]: Reached target default.target - Main User Target. Dec 16 02:15:17.626547 systemd[1668]: Startup finished in 141ms. Dec 16 02:15:17.626771 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 02:15:17.633805 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 02:15:17.644572 systemd[1]: Started sshd@1-10.0.0.94:22-10.0.0.1:58454.service - OpenSSH per-connection server daemon (10.0.0.1:58454). 
Dec 16 02:15:17.703141 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 58454 ssh2: RSA SHA256:rH1sOPbnKIrlCFQFTuacxecfE1BEKAl/Bfev/eSdaO4 Dec 16 02:15:17.704561 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:15:17.709676 systemd-logind[1526]: New session 3 of user core. Dec 16 02:15:17.719804 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 02:15:17.730872 sshd[1686]: Connection closed by 10.0.0.1 port 58454 Dec 16 02:15:17.731229 sshd-session[1682]: pam_unix(sshd:session): session closed for user core Dec 16 02:15:17.744060 systemd[1]: sshd@1-10.0.0.94:22-10.0.0.1:58454.service: Deactivated successfully. Dec 16 02:15:17.747226 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 02:15:17.748058 systemd-logind[1526]: Session 3 logged out. Waiting for processes to exit. Dec 16 02:15:17.750535 systemd[1]: Started sshd@2-10.0.0.94:22-10.0.0.1:58458.service - OpenSSH per-connection server daemon (10.0.0.1:58458). Dec 16 02:15:17.751294 systemd-logind[1526]: Removed session 3. Dec 16 02:15:17.813803 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 58458 ssh2: RSA SHA256:rH1sOPbnKIrlCFQFTuacxecfE1BEKAl/Bfev/eSdaO4 Dec 16 02:15:17.814994 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:15:17.818666 systemd-logind[1526]: New session 4 of user core. Dec 16 02:15:17.826777 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 02:15:17.833466 sshd[1696]: Connection closed by 10.0.0.1 port 58458 Dec 16 02:15:17.833364 sshd-session[1692]: pam_unix(sshd:session): session closed for user core Dec 16 02:15:17.846852 systemd[1]: sshd@2-10.0.0.94:22-10.0.0.1:58458.service: Deactivated successfully. Dec 16 02:15:17.849962 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 02:15:17.851515 systemd-logind[1526]: Session 4 logged out. Waiting for processes to exit. Dec 16 02:15:17.852689 systemd[1]: Started sshd@3-10.0.0.94:22-10.0.0.1:58464.service - OpenSSH per-connection server daemon (10.0.0.1:58464). Dec 16 02:15:17.853571 systemd-logind[1526]: Removed session 4. Dec 16 02:15:17.911385 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 58464 ssh2: RSA SHA256:rH1sOPbnKIrlCFQFTuacxecfE1BEKAl/Bfev/eSdaO4 Dec 16 02:15:17.912721 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:15:17.916691 systemd-logind[1526]: New session 5 of user core. Dec 16 02:15:17.932930 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 02:15:17.943636 sshd[1706]: Connection closed by 10.0.0.1 port 58464 Dec 16 02:15:17.943863 sshd-session[1702]: pam_unix(sshd:session): session closed for user core Dec 16 02:15:17.956795 systemd[1]: sshd@3-10.0.0.94:22-10.0.0.1:58464.service: Deactivated successfully. Dec 16 02:15:17.959838 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 02:15:17.960506 systemd-logind[1526]: Session 5 logged out. Waiting for processes to exit. Dec 16 02:15:17.963859 systemd[1]: Started sshd@4-10.0.0.94:22-10.0.0.1:58478.service - OpenSSH per-connection server daemon (10.0.0.1:58478). Dec 16 02:15:17.964447 systemd-logind[1526]: Removed session 5. 
Dec 16 02:15:18.014770 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 58478 ssh2: RSA SHA256:rH1sOPbnKIrlCFQFTuacxecfE1BEKAl/Bfev/eSdaO4 Dec 16 02:15:18.015939 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:15:18.019757 systemd-logind[1526]: New session 6 of user core. Dec 16 02:15:18.028797 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 02:15:18.044842 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 02:15:18.045092 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 02:15:18.055366 sudo[1717]: pam_unix(sudo:session): session closed for user root Dec 16 02:15:18.056632 sshd[1716]: Connection closed by 10.0.0.1 port 58478 Dec 16 02:15:18.057090 sshd-session[1712]: pam_unix(sshd:session): session closed for user core Dec 16 02:15:18.065939 systemd[1]: sshd@4-10.0.0.94:22-10.0.0.1:58478.service: Deactivated successfully. Dec 16 02:15:18.067398 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 02:15:18.070235 systemd-logind[1526]: Session 6 logged out. Waiting for processes to exit. Dec 16 02:15:18.072566 systemd[1]: Started sshd@5-10.0.0.94:22-10.0.0.1:58486.service - OpenSSH per-connection server daemon (10.0.0.1:58486). Dec 16 02:15:18.073236 systemd-logind[1526]: Removed session 6. Dec 16 02:15:18.129595 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 58486 ssh2: RSA SHA256:rH1sOPbnKIrlCFQFTuacxecfE1BEKAl/Bfev/eSdaO4 Dec 16 02:15:18.130920 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:15:18.135415 systemd-logind[1526]: New session 7 of user core. Dec 16 02:15:18.151894 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 02:15:18.163547 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 02:15:18.163822 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 02:15:18.167650 sudo[1730]: pam_unix(sudo:session): session closed for user root Dec 16 02:15:18.173057 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 02:15:18.173304 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 02:15:18.179989 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 02:15:18.209000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 02:15:18.210794 kernel: kauditd_printk_skb: 184 callbacks suppressed Dec 16 02:15:18.210831 kernel: audit: type=1305 audit(1765851318.209:227): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 02:15:18.211044 augenrules[1754]: No rules Dec 16 02:15:18.212263 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 02:15:18.209000 audit[1754]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd486d6c0 a2=420 a3=0 items=0 ppid=1735 pid=1754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:18.212546 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 16 02:15:18.215754 kernel: audit: type=1300 audit(1765851318.209:227): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd486d6c0 a2=420 a3=0 items=0 ppid=1735 pid=1754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:18.215788 kernel: audit: type=1327 audit(1765851318.209:227): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 02:15:18.209000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 02:15:18.216001 sudo[1729]: pam_unix(sudo:session): session closed for user root Dec 16 02:15:18.217500 sshd[1728]: Connection closed by 10.0.0.1 port 58486 Dec 16 02:15:18.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:18.217778 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Dec 16 02:15:18.220219 kernel: audit: type=1130 audit(1765851318.211:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:18.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:18.222766 kernel: audit: type=1131 audit(1765851318.211:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:18.215000 audit[1729]: USER_END pid=1729 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:15:18.225449 kernel: audit: type=1106 audit(1765851318.215:230): pid=1729 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:15:18.225550 kernel: audit: type=1104 audit(1765851318.215:231): pid=1729 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:15:18.215000 audit[1729]: CRED_DISP pid=1729 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 02:15:18.227923 kernel: audit: type=1106 audit(1765851318.216:232): pid=1724 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:15:18.216000 audit[1724]: USER_END pid=1724 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:15:18.227854 systemd[1]: sshd@5-10.0.0.94:22-10.0.0.1:58486.service: Deactivated successfully. Dec 16 02:15:18.229297 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 02:15:18.230105 systemd-logind[1526]: Session 7 logged out. Waiting for processes to exit. Dec 16 02:15:18.216000 audit[1724]: CRED_DISP pid=1724 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:15:18.232500 systemd[1]: Started sshd@6-10.0.0.94:22-10.0.0.1:58488.service - OpenSSH per-connection server daemon (10.0.0.1:58488). Dec 16 02:15:18.234063 systemd-logind[1526]: Removed session 7. Dec 16 02:15:18.234500 kernel: audit: type=1104 audit(1765851318.216:233): pid=1724 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:15:18.234536 kernel: audit: type=1131 audit(1765851318.227:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.94:22-10.0.0.1:58486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:18.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.94:22-10.0.0.1:58486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:18.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.94:22-10.0.0.1:58488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:15:18.296000 audit[1763]: USER_ACCT pid=1763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:15:18.298693 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 58488 ssh2: RSA SHA256:rH1sOPbnKIrlCFQFTuacxecfE1BEKAl/Bfev/eSdaO4 Dec 16 02:15:18.297000 audit[1763]: CRED_ACQ pid=1763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:15:18.297000 audit[1763]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe1c5f800 a2=3 a3=0 items=0 ppid=1 pid=1763 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:18.297000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:15:18.299327 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:15:18.303552 systemd-logind[1526]: New session 8 of user core. Dec 16 02:15:18.310780 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 02:15:18.313000 audit[1763]: USER_START pid=1763 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:15:18.315000 audit[1767]: CRED_ACQ pid=1767 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:15:18.322000 audit[1768]: USER_ACCT pid=1768 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:15:18.323321 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 02:15:18.322000 audit[1768]: CRED_REFR pid=1768 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:15:18.322000 audit[1768]: USER_START pid=1768 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:15:18.323581 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 02:15:18.603627 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 16 02:15:18.621902 (dockerd)[1790]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 02:15:18.826261 dockerd[1790]: time="2025-12-16T02:15:18.826191565Z" level=info msg="Starting up" Dec 16 02:15:18.827586 dockerd[1790]: time="2025-12-16T02:15:18.827561507Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 02:15:18.837414 dockerd[1790]: time="2025-12-16T02:15:18.837353927Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 02:15:19.018326 dockerd[1790]: time="2025-12-16T02:15:19.018218007Z" level=info msg="Loading containers: start." Dec 16 02:15:19.028630 kernel: Initializing XFRM netlink socket Dec 16 02:15:19.067000 audit[1842]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1842 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.067000 audit[1842]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffeb02ebe0 a2=0 a3=0 items=0 ppid=1790 pid=1842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.067000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 02:15:19.068000 audit[1844]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1844 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.068000 audit[1844]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffdde32310 a2=0 a3=0 items=0 ppid=1790 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.068000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 02:15:19.069000 audit[1846]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1846 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.069000 audit[1846]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2a124d0 a2=0 a3=0 items=0 ppid=1790 pid=1846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.069000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 02:15:19.071000 audit[1848]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1848 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.071000 audit[1848]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd87d0690 a2=0 a3=0 items=0 ppid=1790 pid=1848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.071000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 02:15:19.073000 audit[1850]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=1850 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.073000 audit[1850]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffe9e1db0 a2=0 a3=0 items=0 ppid=1790 pid=1850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.073000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 02:15:19.075000 audit[1852]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1852 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.075000 audit[1852]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffcabc1f80 a2=0 a3=0 items=0 ppid=1790 pid=1852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.075000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 02:15:19.077000 audit[1854]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1854 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.077000 audit[1854]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffeab2e990 a2=0 a3=0 items=0 ppid=1790 pid=1854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.077000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 02:15:19.079000 audit[1856]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1856 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.079000 audit[1856]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffff8284470 a2=0 a3=0 items=0 ppid=1790 pid=1856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.079000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 02:15:19.107000 audit[1859]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1859 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.107000 audit[1859]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffed6f0850 a2=0 a3=0 items=0 ppid=1790 pid=1859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.107000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 02:15:19.109000 audit[1861]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1861 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.109000 audit[1861]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd9b60d50 a2=0 a3=0 items=0 ppid=1790 pid=1861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.109000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 02:15:19.111000 audit[1863]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1863 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.111000 audit[1863]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=fffff5296f10 a2=0 a3=0 items=0 ppid=1790 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.111000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 02:15:19.112000 audit[1865]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1865 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.112000 audit[1865]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd9e1a2f0 a2=0 a3=0 items=0 ppid=1790 pid=1865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.112000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 02:15:19.114000 audit[1867]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1867 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.114000 audit[1867]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffcbda6960 a2=0 a3=0 items=0 ppid=1790 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.114000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 02:15:19.145000 audit[1897]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1897 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:19.145000 audit[1897]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffffea230b0 a2=0 a3=0 items=0 ppid=1790 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.145000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 02:15:19.147000 audit[1899]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1899 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:19.147000 audit[1899]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe1f1b260 a2=0 a3=0 items=0 ppid=1790 
pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.147000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 02:15:19.149000 audit[1901]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1901 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:19.149000 audit[1901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe37926a0 a2=0 a3=0 items=0 ppid=1790 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.149000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 02:15:19.151000 audit[1903]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:19.151000 audit[1903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd4405300 a2=0 a3=0 items=0 ppid=1790 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.151000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 02:15:19.153000 audit[1905]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:19.153000 audit[1905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe989f9c0 a2=0 a3=0 items=0 ppid=1790 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.153000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 02:15:19.155000 audit[1907]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:19.155000 audit[1907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffed879900 a2=0 a3=0 items=0 ppid=1790 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.155000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 02:15:19.158000 audit[1909]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:19.158000 audit[1909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc478bae0 a2=0 a3=0 items=0 ppid=1790 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
02:15:19.158000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 02:15:19.160000 audit[1911]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:19.160000 audit[1911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffcca76a10 a2=0 a3=0 items=0 ppid=1790 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.160000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 02:15:19.163000 audit[1914]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1914 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:19.163000 audit[1914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffcad19c00 a2=0 a3=0 items=0 ppid=1790 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.163000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 02:15:19.165000 audit[1916]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1916 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:19.165000 audit[1916]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff3dd79c0 a2=0 a3=0 items=0 ppid=1790 pid=1916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.165000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 02:15:19.167000 audit[1918]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1918 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:19.167000 audit[1918]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffff179a80 a2=0 a3=0 items=0 ppid=1790 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.167000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 02:15:19.169000 audit[1920]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1920 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:19.169000 audit[1920]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffde318cb0 a2=0 a3=0 items=0 ppid=1790 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.169000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 02:15:19.171000 audit[1922]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1922 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:19.171000 audit[1922]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffde613440 a2=0 a3=0 items=0 ppid=1790 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.171000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 02:15:19.176000 audit[1927]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.176000 audit[1927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe30953b0 a2=0 a3=0 items=0 ppid=1790 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.176000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 02:15:19.178000 audit[1929]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.178000 audit[1929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd4804ef0 a2=0 a3=0 items=0 ppid=1790 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.178000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 02:15:19.180000 audit[1931]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.180000 audit[1931]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffffb2d9eb0 a2=0 a3=0 items=0 ppid=1790 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.180000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 02:15:19.181000 audit[1933]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:19.181000 audit[1933]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc912a050 a2=0 a3=0 items=0 ppid=1790 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.181000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 02:15:19.183000 audit[1935]: NETFILTER_CFG table=filter:32 family=10 
entries=1 op=nft_register_rule pid=1935 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:19.183000 audit[1935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc08904f0 a2=0 a3=0 items=0 ppid=1790 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.183000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 02:15:19.185000 audit[1937]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1937 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:19.185000 audit[1937]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff0705970 a2=0 a3=0 items=0 ppid=1790 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.185000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 02:15:19.196000 audit[1942]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=1942 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.196000 audit[1942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffc56c14a0 a2=0 a3=0 items=0 ppid=1790 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.196000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 02:15:19.200000 audit[1944]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=1944 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.200000 audit[1944]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd212aa50 a2=0 a3=0 items=0 ppid=1790 pid=1944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.200000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 02:15:19.207000 audit[1952]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=1952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.207000 audit[1952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=fffff2c24eb0 a2=0 a3=0 items=0 ppid=1790 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.207000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 02:15:19.216000 audit[1958]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=1958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 
02:15:19.216000 audit[1958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffcff00bd0 a2=0 a3=0 items=0 ppid=1790 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.216000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 02:15:19.218000 audit[1960]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=1960 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.218000 audit[1960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffd3da70a0 a2=0 a3=0 items=0 ppid=1790 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.218000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 02:15:19.219000 audit[1962]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=1962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.219000 audit[1962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc9341eb0 a2=0 a3=0 items=0 ppid=1790 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.219000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 02:15:19.222000 audit[1964]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=1964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.222000 audit[1964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffd7b3f1e0 a2=0 a3=0 items=0 ppid=1790 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.222000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 02:15:19.224000 audit[1966]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=1966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:19.224000 audit[1966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdbce8060 a2=0 a3=0 items=0 ppid=1790 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:19.224000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 02:15:19.225705 
systemd-networkd[1468]: docker0: Link UP Dec 16 02:15:19.229632 dockerd[1790]: time="2025-12-16T02:15:19.229587074Z" level=info msg="Loading containers: done." Dec 16 02:15:19.248142 dockerd[1790]: time="2025-12-16T02:15:19.248088951Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 02:15:19.248278 dockerd[1790]: time="2025-12-16T02:15:19.248174656Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 02:15:19.248343 dockerd[1790]: time="2025-12-16T02:15:19.248314042Z" level=info msg="Initializing buildkit" Dec 16 02:15:19.269343 dockerd[1790]: time="2025-12-16T02:15:19.269242784Z" level=info msg="Completed buildkit initialization" Dec 16 02:15:19.277400 dockerd[1790]: time="2025-12-16T02:15:19.277157199Z" level=info msg="Daemon has completed initialization" Dec 16 02:15:19.277400 dockerd[1790]: time="2025-12-16T02:15:19.277227867Z" level=info msg="API listen on /run/docker.sock" Dec 16 02:15:19.277463 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 02:15:19.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:19.895475 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2761816277-merged.mount: Deactivated successfully. Dec 16 02:15:19.934411 containerd[1550]: time="2025-12-16T02:15:19.934365983Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 02:15:20.815305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3835203196.mount: Deactivated successfully. 
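The dockerd and containerd records above append logfmt-style key="value" pairs after the journal prefix. A minimal sketch of pulling one apart, assuming Python and using a line quoted from the log with the journal prefix stripped (a hypothetical copy, not part of the journal itself):

    import shlex

    # One dockerd record from the log, journal timestamp removed.
    line = 'time="2025-12-16T02:15:19.229587074Z" level=info msg="Loading containers: done."'

    fields = dict(token.split("=", 1) for token in shlex.split(line))
    print(fields["level"], "-", fields["msg"])   # info - Loading containers: done.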
Dec 16 02:15:21.793819 containerd[1550]: time="2025-12-16T02:15:21.793762657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:15:21.795308 containerd[1550]: time="2025-12-16T02:15:21.795256826Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=24835766" Dec 16 02:15:21.796206 containerd[1550]: time="2025-12-16T02:15:21.796174325Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:15:21.798642 containerd[1550]: time="2025-12-16T02:15:21.798609556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:15:21.799648 containerd[1550]: time="2025-12-16T02:15:21.799609245Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 1.865187856s" Dec 16 02:15:21.799682 containerd[1550]: time="2025-12-16T02:15:21.799646656Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Dec 16 02:15:21.800214 containerd[1550]: time="2025-12-16T02:15:21.800191706Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 16 02:15:22.997485 containerd[1550]: time="2025-12-16T02:15:22.997432043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:15:22.999063 containerd[1550]: time="2025-12-16T02:15:22.999027980Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22610801" Dec 16 02:15:23.000026 containerd[1550]: time="2025-12-16T02:15:22.999993013Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:15:23.003088 containerd[1550]: time="2025-12-16T02:15:23.003053808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:15:23.004798 containerd[1550]: time="2025-12-16T02:15:23.004761249Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.204541666s" Dec 16 02:15:23.004798 containerd[1550]: time="2025-12-16T02:15:23.004793467Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Dec 16 
02:15:23.005453 containerd[1550]: time="2025-12-16T02:15:23.005377916Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 16 02:15:24.378566 containerd[1550]: time="2025-12-16T02:15:24.378507969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:15:24.379161 containerd[1550]: time="2025-12-16T02:15:24.379110413Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=0" Dec 16 02:15:24.380290 containerd[1550]: time="2025-12-16T02:15:24.380249124Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:15:24.383807 containerd[1550]: time="2025-12-16T02:15:24.383772205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:15:24.385446 containerd[1550]: time="2025-12-16T02:15:24.385404962Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.379973763s" Dec 16 02:15:24.385446 containerd[1550]: time="2025-12-16T02:15:24.385440893Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Dec 16 02:15:24.386034 containerd[1550]: time="2025-12-16T02:15:24.385912274Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 16 02:15:25.572633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount118109618.mount: Deactivated successfully. 
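The pull messages above and below report an image size and the wall-clock time the pull took; dividing the two gives a rough registry throughput. A minimal sketch, assuming the size fields are bytes and copying the figures from the log:

    # Image size (from the log, assumed to be bytes) and pull duration in seconds.
    pulls = {
        "kube-apiserver:v1.32.10":          (26428558, 1.865187856),
        "kube-controller-manager:v1.32.10": (24203439, 1.204541666),
        "kube-scheduler:v1.32.10":          (19202938, 1.379973763),
    }
    for image, (size, seconds) in pulls.items():
        print(f"{image}: {size / seconds / 2**20:.1f} MiB/s")   # roughly 13-19 MiB/s here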
Dec 16 02:15:25.813800 containerd[1550]: time="2025-12-16T02:15:25.813738659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:15:25.815255 containerd[1550]: time="2025-12-16T02:15:25.815204467Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=17716804" Dec 16 02:15:25.816054 containerd[1550]: time="2025-12-16T02:15:25.816003380Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:15:25.817971 containerd[1550]: time="2025-12-16T02:15:25.817931430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:15:25.818718 containerd[1550]: time="2025-12-16T02:15:25.818683007Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.432638149s" Dec 16 02:15:25.818748 containerd[1550]: time="2025-12-16T02:15:25.818715961Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Dec 16 02:15:25.819185 containerd[1550]: time="2025-12-16T02:15:25.819166198Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 02:15:25.983253 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 02:15:25.985058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:15:26.124638 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:15:26.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:26.128450 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 16 02:15:26.128506 kernel: audit: type=1130 audit(1765851326.123:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:26.129150 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 02:15:26.300787 kubelet[2091]: E1216 02:15:26.300654 2091 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 02:15:26.303990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 02:15:26.304119 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
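After the failed kubelet start above, systemd schedules another attempt about ten seconds later (the "restart counter is at 2" entry further down). A minimal sketch of measuring that gap from the journal timestamps, assuming Python and ignoring the missing year since both stamps fall on the same day:

    from datetime import datetime

    # Timestamps copied from the log; the journal format carries no year.
    failed = "Dec 16 02:15:26.303990"       # kubelet.service: Main process exited
    rescheduled = "Dec 16 02:15:36.380486"  # Scheduled restart job, restart counter is at 2

    fmt = "%b %d %H:%M:%S.%f"
    gap = datetime.strptime(rescheduled, fmt) - datetime.strptime(failed, fmt)
    print(gap.total_seconds())   # about 10.08 s between the failure and the next attempt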
Dec 16 02:15:26.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 02:15:26.305839 systemd[1]: kubelet.service: Consumed 149ms CPU time, 107.9M memory peak. Dec 16 02:15:26.308617 kernel: audit: type=1131 audit(1765851326.304:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 02:15:27.022480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3401997531.mount: Deactivated successfully. Dec 16 02:15:27.733923 containerd[1550]: time="2025-12-16T02:15:27.733854335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:15:27.734833 containerd[1550]: time="2025-12-16T02:15:27.734735453Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=15958291" Dec 16 02:15:27.735615 containerd[1550]: time="2025-12-16T02:15:27.735532272Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:15:27.738091 containerd[1550]: time="2025-12-16T02:15:27.738038239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:15:27.739109 containerd[1550]: time="2025-12-16T02:15:27.739080112Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.919884964s" Dec 16 02:15:27.739170 containerd[1550]: time="2025-12-16T02:15:27.739115103Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Dec 16 02:15:27.739618 containerd[1550]: time="2025-12-16T02:15:27.739544521Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 02:15:28.398050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3618335086.mount: Deactivated successfully. 
Dec 16 02:15:28.402800 containerd[1550]: time="2025-12-16T02:15:28.402752577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 02:15:28.403983 containerd[1550]: time="2025-12-16T02:15:28.403924445Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 02:15:28.405038 containerd[1550]: time="2025-12-16T02:15:28.404991251Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 02:15:28.407004 containerd[1550]: time="2025-12-16T02:15:28.406967313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 02:15:28.407580 containerd[1550]: time="2025-12-16T02:15:28.407542898Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 667.91903ms" Dec 16 02:15:28.407580 containerd[1550]: time="2025-12-16T02:15:28.407571675Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 16 02:15:28.408072 containerd[1550]: time="2025-12-16T02:15:28.408028533Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 02:15:29.372893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3971979954.mount: Deactivated successfully. 
Dec 16 02:15:30.640074 containerd[1550]: time="2025-12-16T02:15:30.639986508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:15:30.640742 containerd[1550]: time="2025-12-16T02:15:30.640688554Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=66060366" Dec 16 02:15:30.641572 containerd[1550]: time="2025-12-16T02:15:30.641538099Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:15:30.644864 containerd[1550]: time="2025-12-16T02:15:30.644835068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:15:30.645806 containerd[1550]: time="2025-12-16T02:15:30.645772775Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.237705988s" Dec 16 02:15:30.645806 containerd[1550]: time="2025-12-16T02:15:30.645805013Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Dec 16 02:15:36.380486 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 02:15:36.382579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:15:36.397159 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 02:15:36.397232 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 02:15:36.398643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:15:36.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 02:15:36.400889 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:15:36.401725 kernel: audit: type=1130 audit(1765851336.398:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 02:15:36.425197 systemd[1]: Reload requested from client PID 2239 ('systemctl') (unit session-8.scope)... Dec 16 02:15:36.425214 systemd[1]: Reloading... Dec 16 02:15:36.499664 zram_generator::config[2285]: No configuration found. Dec 16 02:15:36.828673 systemd[1]: Reloading finished in 403 ms. 
Dec 16 02:15:36.851000 audit: BPF prog-id=63 op=LOAD Dec 16 02:15:36.851000 audit: BPF prog-id=60 op=UNLOAD Dec 16 02:15:36.853602 kernel: audit: type=1334 audit(1765851336.851:288): prog-id=63 op=LOAD Dec 16 02:15:36.853654 kernel: audit: type=1334 audit(1765851336.851:289): prog-id=60 op=UNLOAD Dec 16 02:15:36.853684 kernel: audit: type=1334 audit(1765851336.851:290): prog-id=64 op=LOAD Dec 16 02:15:36.853702 kernel: audit: type=1334 audit(1765851336.852:291): prog-id=65 op=LOAD Dec 16 02:15:36.853716 kernel: audit: type=1334 audit(1765851336.852:292): prog-id=61 op=UNLOAD Dec 16 02:15:36.851000 audit: BPF prog-id=64 op=LOAD Dec 16 02:15:36.852000 audit: BPF prog-id=65 op=LOAD Dec 16 02:15:36.852000 audit: BPF prog-id=61 op=UNLOAD Dec 16 02:15:36.852000 audit: BPF prog-id=62 op=UNLOAD Dec 16 02:15:36.854000 audit: BPF prog-id=66 op=LOAD Dec 16 02:15:36.857126 kernel: audit: type=1334 audit(1765851336.852:293): prog-id=62 op=UNLOAD Dec 16 02:15:36.857164 kernel: audit: type=1334 audit(1765851336.854:294): prog-id=66 op=LOAD Dec 16 02:15:36.857181 kernel: audit: type=1334 audit(1765851336.854:295): prog-id=57 op=UNLOAD Dec 16 02:15:36.854000 audit: BPF prog-id=57 op=UNLOAD Dec 16 02:15:36.857933 kernel: audit: type=1334 audit(1765851336.855:296): prog-id=67 op=LOAD Dec 16 02:15:36.855000 audit: BPF prog-id=67 op=LOAD Dec 16 02:15:36.861000 audit: BPF prog-id=68 op=LOAD Dec 16 02:15:36.861000 audit: BPF prog-id=55 op=UNLOAD Dec 16 02:15:36.861000 audit: BPF prog-id=56 op=UNLOAD Dec 16 02:15:36.862000 audit: BPF prog-id=69 op=LOAD Dec 16 02:15:36.862000 audit: BPF prog-id=46 op=UNLOAD Dec 16 02:15:36.862000 audit: BPF prog-id=70 op=LOAD Dec 16 02:15:36.862000 audit: BPF prog-id=71 op=LOAD Dec 16 02:15:36.862000 audit: BPF prog-id=47 op=UNLOAD Dec 16 02:15:36.862000 audit: BPF prog-id=48 op=UNLOAD Dec 16 02:15:36.863000 audit: BPF prog-id=72 op=LOAD Dec 16 02:15:36.863000 audit: BPF prog-id=59 op=UNLOAD Dec 16 02:15:36.863000 audit: BPF prog-id=73 op=LOAD Dec 16 02:15:36.863000 audit: BPF prog-id=58 op=UNLOAD Dec 16 02:15:36.864000 audit: BPF prog-id=74 op=LOAD Dec 16 02:15:36.864000 audit: BPF prog-id=43 op=UNLOAD Dec 16 02:15:36.864000 audit: BPF prog-id=75 op=LOAD Dec 16 02:15:36.864000 audit: BPF prog-id=76 op=LOAD Dec 16 02:15:36.864000 audit: BPF prog-id=44 op=UNLOAD Dec 16 02:15:36.864000 audit: BPF prog-id=45 op=UNLOAD Dec 16 02:15:36.864000 audit: BPF prog-id=77 op=LOAD Dec 16 02:15:36.864000 audit: BPF prog-id=52 op=UNLOAD Dec 16 02:15:36.865000 audit: BPF prog-id=78 op=LOAD Dec 16 02:15:36.865000 audit: BPF prog-id=79 op=LOAD Dec 16 02:15:36.865000 audit: BPF prog-id=53 op=UNLOAD Dec 16 02:15:36.865000 audit: BPF prog-id=54 op=UNLOAD Dec 16 02:15:36.865000 audit: BPF prog-id=80 op=LOAD Dec 16 02:15:36.865000 audit: BPF prog-id=49 op=UNLOAD Dec 16 02:15:36.865000 audit: BPF prog-id=81 op=LOAD Dec 16 02:15:36.865000 audit: BPF prog-id=82 op=LOAD Dec 16 02:15:36.865000 audit: BPF prog-id=50 op=UNLOAD Dec 16 02:15:36.865000 audit: BPF prog-id=51 op=UNLOAD Dec 16 02:15:36.883053 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 02:15:36.883130 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 02:15:36.883410 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:15:36.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 02:15:36.883467 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95.1M memory peak. Dec 16 02:15:36.885010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:15:37.010506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:15:37.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:37.014098 (kubelet)[2330]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 02:15:37.049654 kubelet[2330]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 02:15:37.049654 kubelet[2330]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 02:15:37.049654 kubelet[2330]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 02:15:37.049654 kubelet[2330]: I1216 02:15:37.048741 2330 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 02:15:38.100370 kubelet[2330]: I1216 02:15:38.100323 2330 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 02:15:38.100370 kubelet[2330]: I1216 02:15:38.100356 2330 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 02:15:38.100889 kubelet[2330]: I1216 02:15:38.100861 2330 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 02:15:38.125417 kubelet[2330]: E1216 02:15:38.125371 2330 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.94:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Dec 16 02:15:38.126940 kubelet[2330]: I1216 02:15:38.126913 2330 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 02:15:38.131690 kubelet[2330]: I1216 02:15:38.131671 2330 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 02:15:38.134322 kubelet[2330]: I1216 02:15:38.134303 2330 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 02:15:38.134968 kubelet[2330]: I1216 02:15:38.134931 2330 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 02:15:38.135124 kubelet[2330]: I1216 02:15:38.134971 2330 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 02:15:38.135218 kubelet[2330]: I1216 02:15:38.135195 2330 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 02:15:38.135218 kubelet[2330]: I1216 02:15:38.135203 2330 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 02:15:38.135406 kubelet[2330]: I1216 02:15:38.135388 2330 state_mem.go:36] "Initialized new in-memory state store" Dec 16 02:15:38.137864 kubelet[2330]: I1216 02:15:38.137834 2330 kubelet.go:446] "Attempting to sync node with API server" Dec 16 02:15:38.137864 kubelet[2330]: I1216 02:15:38.137863 2330 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 02:15:38.137928 kubelet[2330]: I1216 02:15:38.137885 2330 kubelet.go:352] "Adding apiserver pod source" Dec 16 02:15:38.137928 kubelet[2330]: I1216 02:15:38.137902 2330 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 02:15:38.140347 kubelet[2330]: W1216 02:15:38.140300 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Dec 16 02:15:38.140424 kubelet[2330]: E1216 02:15:38.140366 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Dec 16 02:15:38.140667 kubelet[2330]: W1216 02:15:38.140601 2330 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Dec 16 02:15:38.140667 kubelet[2330]: E1216 02:15:38.140646 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Dec 16 02:15:38.140842 kubelet[2330]: I1216 02:15:38.140821 2330 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 02:15:38.141568 kubelet[2330]: I1216 02:15:38.141545 2330 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 02:15:38.141686 kubelet[2330]: W1216 02:15:38.141673 2330 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 02:15:38.143207 kubelet[2330]: I1216 02:15:38.142658 2330 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 02:15:38.143207 kubelet[2330]: I1216 02:15:38.142737 2330 server.go:1287] "Started kubelet" Dec 16 02:15:38.145029 kubelet[2330]: I1216 02:15:38.144973 2330 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 02:15:38.145276 kubelet[2330]: I1216 02:15:38.145255 2330 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 02:15:38.145320 kubelet[2330]: I1216 02:15:38.145277 2330 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 02:15:38.145354 kubelet[2330]: I1216 02:15:38.145332 2330 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 02:15:38.146146 kubelet[2330]: E1216 02:15:38.145903 2330 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.94:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.94:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1881906fa824f72a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 02:15:38.142709546 +0000 UTC m=+1.125612398,LastTimestamp:2025-12-16 02:15:38.142709546 +0000 UTC m=+1.125612398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 02:15:38.146424 kubelet[2330]: I1216 02:15:38.146334 2330 server.go:479] "Adding debug handlers to kubelet server" Dec 16 02:15:38.147531 kubelet[2330]: I1216 02:15:38.147501 2330 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 02:15:38.147817 kubelet[2330]: I1216 02:15:38.147801 2330 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 02:15:38.148229 kubelet[2330]: E1216 02:15:38.148210 2330 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 02:15:38.148676 kubelet[2330]: 
I1216 02:15:38.148660 2330 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 02:15:38.148736 kubelet[2330]: I1216 02:15:38.148725 2330 reconciler.go:26] "Reconciler: start to sync state" Dec 16 02:15:38.148000 audit[2343]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2343 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:38.148000 audit[2343]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffcd7c4610 a2=0 a3=0 items=0 ppid=2330 pid=2343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.150323 kubelet[2330]: W1216 02:15:38.149606 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Dec 16 02:15:38.150323 kubelet[2330]: E1216 02:15:38.149648 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Dec 16 02:15:38.150323 kubelet[2330]: E1216 02:15:38.149668 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="200ms" Dec 16 02:15:38.150323 kubelet[2330]: I1216 02:15:38.149849 2330 factory.go:221] Registration of the systemd container factory successfully Dec 16 02:15:38.150323 kubelet[2330]: I1216 02:15:38.149912 2330 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 02:15:38.148000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 02:15:38.150819 kubelet[2330]: E1216 02:15:38.150733 2330 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 02:15:38.151370 kubelet[2330]: I1216 02:15:38.151338 2330 factory.go:221] Registration of the containerd container factory successfully Dec 16 02:15:38.151000 audit[2345]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2345 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:38.151000 audit[2345]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd8c77ad0 a2=0 a3=0 items=0 ppid=2330 pid=2345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.151000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 02:15:38.153000 audit[2347]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2347 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:38.153000 audit[2347]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd5743660 a2=0 a3=0 items=0 ppid=2330 pid=2347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.153000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 02:15:38.155000 audit[2349]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2349 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:38.155000 audit[2349]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff48b26f0 a2=0 a3=0 items=0 ppid=2330 pid=2349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.155000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 02:15:38.160000 audit[2354]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2354 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:38.160000 audit[2354]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffeb38c760 a2=0 a3=0 items=0 ppid=2330 pid=2354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.160000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 02:15:38.161861 kubelet[2330]: I1216 02:15:38.161754 2330 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 16 02:15:38.161000 audit[2356]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2356 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:38.161000 audit[2356]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffa306ec0 a2=0 a3=0 items=0 ppid=2330 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.161000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 02:15:38.162809 kubelet[2330]: I1216 02:15:38.162791 2330 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 02:15:38.163128 kubelet[2330]: I1216 02:15:38.162847 2330 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 02:15:38.163128 kubelet[2330]: I1216 02:15:38.162867 2330 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 02:15:38.163128 kubelet[2330]: I1216 02:15:38.162874 2330 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 02:15:38.163128 kubelet[2330]: E1216 02:15:38.162914 2330 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 02:15:38.162000 audit[2358]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2358 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:38.162000 audit[2358]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd8fe3c70 a2=0 a3=0 items=0 ppid=2330 pid=2358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.162000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 02:15:38.163719 kubelet[2330]: I1216 02:15:38.163661 2330 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 02:15:38.163719 kubelet[2330]: I1216 02:15:38.163671 2330 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 02:15:38.163719 kubelet[2330]: I1216 02:15:38.163687 2330 state_mem.go:36] "Initialized new in-memory state store" Dec 16 02:15:38.163000 audit[2359]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2359 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:38.163000 audit[2359]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffca40f520 a2=0 a3=0 items=0 ppid=2330 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.163000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 02:15:38.163000 audit[2360]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:38.163000 audit[2360]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdb075300 a2=0 a3=0 items=0 ppid=2330 pid=2360 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.163000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 02:15:38.165047 kubelet[2330]: W1216 02:15:38.165020 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Dec 16 02:15:38.165108 kubelet[2330]: E1216 02:15:38.165053 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Dec 16 02:15:38.165000 audit[2363]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2363 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:38.165000 audit[2363]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffed4862e0 a2=0 a3=0 items=0 ppid=2330 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.165000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 02:15:38.165000 audit[2362]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2362 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:38.165000 audit[2362]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffee94d020 a2=0 a3=0 items=0 ppid=2330 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.165000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 02:15:38.166000 audit[2364]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2364 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:38.166000 audit[2364]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd99b7b20 a2=0 a3=0 items=0 ppid=2330 pid=2364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.166000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 02:15:38.249228 kubelet[2330]: E1216 02:15:38.249191 2330 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 02:15:38.253700 kubelet[2330]: I1216 02:15:38.253677 2330 policy_none.go:49] "None policy: Start" Dec 16 02:15:38.253700 kubelet[2330]: I1216 02:15:38.253700 2330 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 02:15:38.253757 kubelet[2330]: I1216 02:15:38.253714 2330 state_mem.go:35] "Initializing new in-memory 
state store" Dec 16 02:15:38.259364 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 02:15:38.263229 kubelet[2330]: E1216 02:15:38.263203 2330 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 02:15:38.273909 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 02:15:38.276931 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 02:15:38.286260 kubelet[2330]: I1216 02:15:38.286236 2330 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 02:15:38.286522 kubelet[2330]: I1216 02:15:38.286504 2330 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 02:15:38.286627 kubelet[2330]: I1216 02:15:38.286576 2330 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 02:15:38.286864 kubelet[2330]: I1216 02:15:38.286848 2330 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 02:15:38.287837 kubelet[2330]: E1216 02:15:38.287818 2330 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 02:15:38.288088 kubelet[2330]: E1216 02:15:38.288071 2330 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 16 02:15:38.350246 kubelet[2330]: E1216 02:15:38.350203 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="400ms" Dec 16 02:15:38.388722 kubelet[2330]: I1216 02:15:38.388570 2330 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 02:15:38.389118 kubelet[2330]: E1216 02:15:38.389083 2330 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Dec 16 02:15:38.471388 systemd[1]: Created slice kubepods-burstable-podb832cfa960cce4bdde34b5758a30aa47.slice - libcontainer container kubepods-burstable-podb832cfa960cce4bdde34b5758a30aa47.slice. Dec 16 02:15:38.489350 kubelet[2330]: E1216 02:15:38.489165 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 02:15:38.492650 systemd[1]: Created slice kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice - libcontainer container kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice. Dec 16 02:15:38.506983 kubelet[2330]: E1216 02:15:38.506950 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 02:15:38.509407 systemd[1]: Created slice kubepods-burstable-pod0a68423804124305a9de061f38780871.slice - libcontainer container kubepods-burstable-pod0a68423804124305a9de061f38780871.slice. 
Dec 16 02:15:38.510958 kubelet[2330]: E1216 02:15:38.510808 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 02:15:38.551349 kubelet[2330]: I1216 02:15:38.551308 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 02:15:38.551487 kubelet[2330]: I1216 02:15:38.551474 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b832cfa960cce4bdde34b5758a30aa47-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b832cfa960cce4bdde34b5758a30aa47\") " pod="kube-system/kube-apiserver-localhost" Dec 16 02:15:38.551608 kubelet[2330]: I1216 02:15:38.551560 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b832cfa960cce4bdde34b5758a30aa47-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b832cfa960cce4bdde34b5758a30aa47\") " pod="kube-system/kube-apiserver-localhost" Dec 16 02:15:38.551754 kubelet[2330]: I1216 02:15:38.551582 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 02:15:38.551754 kubelet[2330]: I1216 02:15:38.551721 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 02:15:38.551855 kubelet[2330]: I1216 02:15:38.551739 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 16 02:15:38.551958 kubelet[2330]: I1216 02:15:38.551901 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b832cfa960cce4bdde34b5758a30aa47-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b832cfa960cce4bdde34b5758a30aa47\") " pod="kube-system/kube-apiserver-localhost" Dec 16 02:15:38.551958 kubelet[2330]: I1216 02:15:38.551924 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 02:15:38.552110 kubelet[2330]: I1216 02:15:38.552063 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 02:15:38.591441 kubelet[2330]: I1216 02:15:38.591379 2330 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 02:15:38.591957 kubelet[2330]: E1216 02:15:38.591909 2330 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Dec 16 02:15:38.751546 kubelet[2330]: E1216 02:15:38.751435 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="800ms" Dec 16 02:15:38.790199 kubelet[2330]: E1216 02:15:38.790167 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:38.790931 containerd[1550]: time="2025-12-16T02:15:38.790750004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b832cfa960cce4bdde34b5758a30aa47,Namespace:kube-system,Attempt:0,}" Dec 16 02:15:38.807831 containerd[1550]: time="2025-12-16T02:15:38.807797223Z" level=info msg="connecting to shim 10e8aed512e1d5fd1a579c68609fe3e82a1acb10736a74d39006310d57cb0e87" address="unix:///run/containerd/s/7ef8abce5ca5c3c31bb833b0449c98d30fc01457c00768290a14318778f75a58" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:15:38.808072 kubelet[2330]: E1216 02:15:38.807969 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:38.808603 containerd[1550]: time="2025-12-16T02:15:38.808411596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,}" Dec 16 02:15:38.811639 kubelet[2330]: E1216 02:15:38.811617 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:38.812041 containerd[1550]: time="2025-12-16T02:15:38.812010246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,}" Dec 16 02:15:38.835795 systemd[1]: Started cri-containerd-10e8aed512e1d5fd1a579c68609fe3e82a1acb10736a74d39006310d57cb0e87.scope - libcontainer container 10e8aed512e1d5fd1a579c68609fe3e82a1acb10736a74d39006310d57cb0e87. 
Dec 16 02:15:38.841239 containerd[1550]: time="2025-12-16T02:15:38.841195184Z" level=info msg="connecting to shim eaa0305fe90098900014669129faed990acd63a8b7b7462d018055cf4f1302ac" address="unix:///run/containerd/s/43e917178a327f0fc0b73599ba4024f8df96d7eeed6620a9d71f42fe83e21d5a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:15:38.842168 containerd[1550]: time="2025-12-16T02:15:38.842132844Z" level=info msg="connecting to shim d1b33935a7f4c18e65a71a8d4aa6233614f6ad23f10dc4846ced0d51302a4e31" address="unix:///run/containerd/s/dd839025d298ba5c1be890eb427d68d1a89548f750c5e007645c04f3dbafd463" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:15:38.854000 audit: BPF prog-id=83 op=LOAD Dec 16 02:15:38.854000 audit: BPF prog-id=84 op=LOAD Dec 16 02:15:38.854000 audit[2384]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2373 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130653861656435313265316435666431613537396336383630396665 Dec 16 02:15:38.854000 audit: BPF prog-id=84 op=UNLOAD Dec 16 02:15:38.854000 audit[2384]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2373 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130653861656435313265316435666431613537396336383630396665 Dec 16 02:15:38.854000 audit: BPF prog-id=85 op=LOAD Dec 16 02:15:38.854000 audit[2384]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2373 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130653861656435313265316435666431613537396336383630396665 Dec 16 02:15:38.854000 audit: BPF prog-id=86 op=LOAD Dec 16 02:15:38.854000 audit[2384]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2373 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130653861656435313265316435666431613537396336383630396665 Dec 16 02:15:38.854000 audit: BPF prog-id=86 op=UNLOAD Dec 16 02:15:38.854000 audit[2384]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2373 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130653861656435313265316435666431613537396336383630396665 Dec 16 02:15:38.854000 audit: BPF prog-id=85 op=UNLOAD Dec 16 02:15:38.854000 audit[2384]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2373 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130653861656435313265316435666431613537396336383630396665 Dec 16 02:15:38.854000 audit: BPF prog-id=87 op=LOAD Dec 16 02:15:38.854000 audit[2384]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2373 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130653861656435313265316435666431613537396336383630396665 Dec 16 02:15:38.866785 systemd[1]: Started cri-containerd-eaa0305fe90098900014669129faed990acd63a8b7b7462d018055cf4f1302ac.scope - libcontainer container eaa0305fe90098900014669129faed990acd63a8b7b7462d018055cf4f1302ac. Dec 16 02:15:38.869734 systemd[1]: Started cri-containerd-d1b33935a7f4c18e65a71a8d4aa6233614f6ad23f10dc4846ced0d51302a4e31.scope - libcontainer container d1b33935a7f4c18e65a71a8d4aa6233614f6ad23f10dc4846ced0d51302a4e31. 
Dec 16 02:15:38.881000 audit: BPF prog-id=88 op=LOAD Dec 16 02:15:38.881000 audit: BPF prog-id=89 op=LOAD Dec 16 02:15:38.881000 audit[2441]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=2411 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613033303566653930303938393030303134363639313239666165 Dec 16 02:15:38.881000 audit: BPF prog-id=89 op=UNLOAD Dec 16 02:15:38.881000 audit[2441]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2411 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.881000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613033303566653930303938393030303134363639313239666165 Dec 16 02:15:38.882000 audit: BPF prog-id=90 op=LOAD Dec 16 02:15:38.882000 audit[2441]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=2411 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613033303566653930303938393030303134363639313239666165 Dec 16 02:15:38.882000 audit: BPF prog-id=91 op=LOAD Dec 16 02:15:38.882000 audit[2441]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=2411 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613033303566653930303938393030303134363639313239666165 Dec 16 02:15:38.882000 audit: BPF prog-id=91 op=UNLOAD Dec 16 02:15:38.882000 audit[2441]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2411 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613033303566653930303938393030303134363639313239666165 Dec 16 02:15:38.882000 audit: BPF prog-id=90 op=UNLOAD Dec 16 02:15:38.882000 audit[2441]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2411 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613033303566653930303938393030303134363639313239666165 Dec 16 02:15:38.882000 audit: BPF prog-id=92 op=LOAD Dec 16 02:15:38.882000 audit[2441]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=2411 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613033303566653930303938393030303134363639313239666165 Dec 16 02:15:38.884314 containerd[1550]: time="2025-12-16T02:15:38.884279217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b832cfa960cce4bdde34b5758a30aa47,Namespace:kube-system,Attempt:0,} returns sandbox id \"10e8aed512e1d5fd1a579c68609fe3e82a1acb10736a74d39006310d57cb0e87\"" Dec 16 02:15:38.884000 audit: BPF prog-id=93 op=LOAD Dec 16 02:15:38.884000 audit: BPF prog-id=94 op=LOAD Dec 16 02:15:38.884000 audit[2443]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2419 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431623333393335613766346331386536356137316138643461613632 Dec 16 02:15:38.884000 audit: BPF prog-id=94 op=UNLOAD Dec 16 02:15:38.884000 audit[2443]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2419 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431623333393335613766346331386536356137316138643461613632 Dec 16 02:15:38.885742 kubelet[2330]: E1216 02:15:38.885722 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:38.885000 audit: BPF prog-id=95 op=LOAD Dec 16 02:15:38.885000 audit[2443]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2419 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431623333393335613766346331386536356137316138643461613632 Dec 16 02:15:38.885000 audit: BPF prog-id=96 op=LOAD Dec 16 02:15:38.885000 audit[2443]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2419 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431623333393335613766346331386536356137316138643461613632 Dec 16 02:15:38.886000 audit: BPF prog-id=96 op=UNLOAD Dec 16 02:15:38.886000 audit[2443]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2419 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431623333393335613766346331386536356137316138643461613632 Dec 16 02:15:38.886000 audit: BPF prog-id=95 op=UNLOAD Dec 16 02:15:38.886000 audit[2443]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2419 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431623333393335613766346331386536356137316138643461613632 Dec 16 02:15:38.886000 audit: BPF prog-id=97 op=LOAD Dec 16 02:15:38.886000 audit[2443]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2419 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431623333393335613766346331386536356137316138643461613632 Dec 16 02:15:38.887797 containerd[1550]: time="2025-12-16T02:15:38.887769050Z" level=info msg="CreateContainer within sandbox \"10e8aed512e1d5fd1a579c68609fe3e82a1acb10736a74d39006310d57cb0e87\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 02:15:38.903081 containerd[1550]: time="2025-12-16T02:15:38.903032143Z" level=info msg="Container c9ffc38eb6cb95f96e0fe582cf026a0b24f24ce8bef975dd1359ef5685f8fd73: CDI devices from CRI 
Config.CDIDevices: []" Dec 16 02:15:38.909153 containerd[1550]: time="2025-12-16T02:15:38.909118684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"eaa0305fe90098900014669129faed990acd63a8b7b7462d018055cf4f1302ac\"" Dec 16 02:15:38.910402 kubelet[2330]: E1216 02:15:38.910378 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:38.912096 containerd[1550]: time="2025-12-16T02:15:38.912017253Z" level=info msg="CreateContainer within sandbox \"eaa0305fe90098900014669129faed990acd63a8b7b7462d018055cf4f1302ac\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 02:15:38.915802 containerd[1550]: time="2025-12-16T02:15:38.915770217Z" level=info msg="CreateContainer within sandbox \"10e8aed512e1d5fd1a579c68609fe3e82a1acb10736a74d39006310d57cb0e87\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c9ffc38eb6cb95f96e0fe582cf026a0b24f24ce8bef975dd1359ef5685f8fd73\"" Dec 16 02:15:38.917720 containerd[1550]: time="2025-12-16T02:15:38.917689233Z" level=info msg="StartContainer for \"c9ffc38eb6cb95f96e0fe582cf026a0b24f24ce8bef975dd1359ef5685f8fd73\"" Dec 16 02:15:38.918882 containerd[1550]: time="2025-12-16T02:15:38.918849653Z" level=info msg="connecting to shim c9ffc38eb6cb95f96e0fe582cf026a0b24f24ce8bef975dd1359ef5685f8fd73" address="unix:///run/containerd/s/7ef8abce5ca5c3c31bb833b0449c98d30fc01457c00768290a14318778f75a58" protocol=ttrpc version=3 Dec 16 02:15:38.920286 containerd[1550]: time="2025-12-16T02:15:38.919981037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1b33935a7f4c18e65a71a8d4aa6233614f6ad23f10dc4846ced0d51302a4e31\"" Dec 16 02:15:38.921322 kubelet[2330]: E1216 02:15:38.921281 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:38.921862 containerd[1550]: time="2025-12-16T02:15:38.921834170Z" level=info msg="Container 9459fe316c8b7313e5c57f40564253ef47f488d635f7fa7196bdc06144568d16: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:15:38.922759 containerd[1550]: time="2025-12-16T02:15:38.922724090Z" level=info msg="CreateContainer within sandbox \"d1b33935a7f4c18e65a71a8d4aa6233614f6ad23f10dc4846ced0d51302a4e31\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 02:15:38.931480 containerd[1550]: time="2025-12-16T02:15:38.931439902Z" level=info msg="Container e27f2fe0c74055dde8b46213f0d1d00d0944d5a40776c0f226557ff2c3cfaadb: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:15:38.935774 systemd[1]: Started cri-containerd-c9ffc38eb6cb95f96e0fe582cf026a0b24f24ce8bef975dd1359ef5685f8fd73.scope - libcontainer container c9ffc38eb6cb95f96e0fe582cf026a0b24f24ce8bef975dd1359ef5685f8fd73. 
Dec 16 02:15:38.938914 containerd[1550]: time="2025-12-16T02:15:38.938873859Z" level=info msg="CreateContainer within sandbox \"eaa0305fe90098900014669129faed990acd63a8b7b7462d018055cf4f1302ac\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9459fe316c8b7313e5c57f40564253ef47f488d635f7fa7196bdc06144568d16\"" Dec 16 02:15:38.942186 containerd[1550]: time="2025-12-16T02:15:38.939659368Z" level=info msg="StartContainer for \"9459fe316c8b7313e5c57f40564253ef47f488d635f7fa7196bdc06144568d16\"" Dec 16 02:15:38.942186 containerd[1550]: time="2025-12-16T02:15:38.940712894Z" level=info msg="connecting to shim 9459fe316c8b7313e5c57f40564253ef47f488d635f7fa7196bdc06144568d16" address="unix:///run/containerd/s/43e917178a327f0fc0b73599ba4024f8df96d7eeed6620a9d71f42fe83e21d5a" protocol=ttrpc version=3 Dec 16 02:15:38.944870 containerd[1550]: time="2025-12-16T02:15:38.944832800Z" level=info msg="CreateContainer within sandbox \"d1b33935a7f4c18e65a71a8d4aa6233614f6ad23f10dc4846ced0d51302a4e31\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e27f2fe0c74055dde8b46213f0d1d00d0944d5a40776c0f226557ff2c3cfaadb\"" Dec 16 02:15:38.945233 containerd[1550]: time="2025-12-16T02:15:38.945201344Z" level=info msg="StartContainer for \"e27f2fe0c74055dde8b46213f0d1d00d0944d5a40776c0f226557ff2c3cfaadb\"" Dec 16 02:15:38.946205 containerd[1550]: time="2025-12-16T02:15:38.946173568Z" level=info msg="connecting to shim e27f2fe0c74055dde8b46213f0d1d00d0944d5a40776c0f226557ff2c3cfaadb" address="unix:///run/containerd/s/dd839025d298ba5c1be890eb427d68d1a89548f750c5e007645c04f3dbafd463" protocol=ttrpc version=3 Dec 16 02:15:38.952000 audit: BPF prog-id=98 op=LOAD Dec 16 02:15:38.953000 audit: BPF prog-id=99 op=LOAD Dec 16 02:15:38.953000 audit[2500]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=2373 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339666663333865623663623935663936653066653538326366303236 Dec 16 02:15:38.954000 audit: BPF prog-id=99 op=UNLOAD Dec 16 02:15:38.954000 audit[2500]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2373 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339666663333865623663623935663936653066653538326366303236 Dec 16 02:15:38.954000 audit: BPF prog-id=100 op=LOAD Dec 16 02:15:38.954000 audit[2500]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=2373 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.954000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339666663333865623663623935663936653066653538326366303236 Dec 16 02:15:38.954000 audit: BPF prog-id=101 op=LOAD Dec 16 02:15:38.954000 audit[2500]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=2373 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339666663333865623663623935663936653066653538326366303236 Dec 16 02:15:38.954000 audit: BPF prog-id=101 op=UNLOAD Dec 16 02:15:38.954000 audit[2500]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2373 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339666663333865623663623935663936653066653538326366303236 Dec 16 02:15:38.954000 audit: BPF prog-id=100 op=UNLOAD Dec 16 02:15:38.954000 audit[2500]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2373 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339666663333865623663623935663936653066653538326366303236 Dec 16 02:15:38.954000 audit: BPF prog-id=102 op=LOAD Dec 16 02:15:38.954000 audit[2500]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=2373 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339666663333865623663623935663936653066653538326366303236 Dec 16 02:15:38.967843 systemd[1]: Started cri-containerd-9459fe316c8b7313e5c57f40564253ef47f488d635f7fa7196bdc06144568d16.scope - libcontainer container 9459fe316c8b7313e5c57f40564253ef47f488d635f7fa7196bdc06144568d16. Dec 16 02:15:38.971144 systemd[1]: Started cri-containerd-e27f2fe0c74055dde8b46213f0d1d00d0944d5a40776c0f226557ff2c3cfaadb.scope - libcontainer container e27f2fe0c74055dde8b46213f0d1d00d0944d5a40776c0f226557ff2c3cfaadb. 
Dec 16 02:15:38.982000 audit: BPF prog-id=103 op=LOAD Dec 16 02:15:38.982000 audit: BPF prog-id=104 op=LOAD Dec 16 02:15:38.982000 audit[2522]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2411 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.982000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353966653331366338623733313365356335376634303536343235 Dec 16 02:15:38.983000 audit: BPF prog-id=104 op=UNLOAD Dec 16 02:15:38.983000 audit[2522]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2411 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353966653331366338623733313365356335376634303536343235 Dec 16 02:15:38.983000 audit: BPF prog-id=105 op=LOAD Dec 16 02:15:38.983000 audit[2522]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2411 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353966653331366338623733313365356335376634303536343235 Dec 16 02:15:38.983000 audit: BPF prog-id=106 op=LOAD Dec 16 02:15:38.983000 audit[2522]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2411 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353966653331366338623733313365356335376634303536343235 Dec 16 02:15:38.983000 audit: BPF prog-id=106 op=UNLOAD Dec 16 02:15:38.983000 audit[2522]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2411 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353966653331366338623733313365356335376634303536343235 Dec 16 02:15:38.983000 audit: BPF prog-id=105 op=UNLOAD Dec 16 02:15:38.983000 audit[2522]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2411 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353966653331366338623733313365356335376634303536343235 Dec 16 02:15:38.984000 audit: BPF prog-id=107 op=LOAD Dec 16 02:15:38.983000 audit: BPF prog-id=108 op=LOAD Dec 16 02:15:38.983000 audit[2522]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2411 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934353966653331366338623733313365356335376634303536343235 Dec 16 02:15:38.985312 kubelet[2330]: W1216 02:15:38.985262 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Dec 16 02:15:38.985450 kubelet[2330]: E1216 02:15:38.985427 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Dec 16 02:15:38.985000 audit: BPF prog-id=109 op=LOAD Dec 16 02:15:38.985000 audit[2529]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2419 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532376632666530633734303535646465386234363231336630643164 Dec 16 02:15:38.985000 audit: BPF prog-id=109 op=UNLOAD Dec 16 02:15:38.985000 audit[2529]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2419 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532376632666530633734303535646465386234363231336630643164 Dec 16 02:15:38.985000 audit: BPF prog-id=110 op=LOAD Dec 16 02:15:38.985000 audit[2529]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 
items=0 ppid=2419 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532376632666530633734303535646465386234363231336630643164 Dec 16 02:15:38.985000 audit: BPF prog-id=111 op=LOAD Dec 16 02:15:38.985000 audit[2529]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2419 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532376632666530633734303535646465386234363231336630643164 Dec 16 02:15:38.985000 audit: BPF prog-id=111 op=UNLOAD Dec 16 02:15:38.985000 audit[2529]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2419 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532376632666530633734303535646465386234363231336630643164 Dec 16 02:15:38.985000 audit: BPF prog-id=110 op=UNLOAD Dec 16 02:15:38.985000 audit[2529]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2419 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532376632666530633734303535646465386234363231336630643164 Dec 16 02:15:38.985000 audit: BPF prog-id=112 op=LOAD Dec 16 02:15:38.985000 audit[2529]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2419 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:38.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532376632666530633734303535646465386234363231336630643164 Dec 16 02:15:38.993637 containerd[1550]: time="2025-12-16T02:15:38.993595662Z" level=info msg="StartContainer for \"c9ffc38eb6cb95f96e0fe582cf026a0b24f24ce8bef975dd1359ef5685f8fd73\" returns successfully" Dec 16 02:15:38.994910 kubelet[2330]: I1216 02:15:38.994888 2330 kubelet_node_status.go:75] "Attempting to register node" 
node="localhost" Dec 16 02:15:38.995330 kubelet[2330]: E1216 02:15:38.995302 2330 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Dec 16 02:15:39.019088 kubelet[2330]: W1216 02:15:39.018952 2330 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Dec 16 02:15:39.019088 kubelet[2330]: E1216 02:15:39.019020 2330 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Dec 16 02:15:39.027818 containerd[1550]: time="2025-12-16T02:15:39.027780740Z" level=info msg="StartContainer for \"e27f2fe0c74055dde8b46213f0d1d00d0944d5a40776c0f226557ff2c3cfaadb\" returns successfully" Dec 16 02:15:39.033246 containerd[1550]: time="2025-12-16T02:15:39.033214966Z" level=info msg="StartContainer for \"9459fe316c8b7313e5c57f40564253ef47f488d635f7fa7196bdc06144568d16\" returns successfully" Dec 16 02:15:39.170362 kubelet[2330]: E1216 02:15:39.170144 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 02:15:39.172619 kubelet[2330]: E1216 02:15:39.170722 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:39.174378 kubelet[2330]: E1216 02:15:39.174048 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 02:15:39.174378 kubelet[2330]: E1216 02:15:39.174151 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:39.176044 kubelet[2330]: E1216 02:15:39.176021 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 02:15:39.176124 kubelet[2330]: E1216 02:15:39.176112 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:39.796548 kubelet[2330]: I1216 02:15:39.796518 2330 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 02:15:40.178509 kubelet[2330]: E1216 02:15:40.178283 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 02:15:40.179031 kubelet[2330]: E1216 02:15:40.178959 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:40.180064 kubelet[2330]: E1216 02:15:40.180039 2330 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 02:15:40.180166 
kubelet[2330]: E1216 02:15:40.180148 2330 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:40.776107 kubelet[2330]: E1216 02:15:40.776057 2330 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 16 02:15:40.948723 kubelet[2330]: I1216 02:15:40.948530 2330 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 02:15:40.948723 kubelet[2330]: E1216 02:15:40.948569 2330 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 16 02:15:41.049122 kubelet[2330]: I1216 02:15:41.048775 2330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 02:15:41.055847 kubelet[2330]: E1216 02:15:41.055814 2330 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 16 02:15:41.055847 kubelet[2330]: I1216 02:15:41.055845 2330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 02:15:41.057764 kubelet[2330]: E1216 02:15:41.057659 2330 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 16 02:15:41.057764 kubelet[2330]: I1216 02:15:41.057681 2330 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 02:15:41.059499 kubelet[2330]: E1216 02:15:41.059454 2330 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 16 02:15:41.141074 kubelet[2330]: I1216 02:15:41.141024 2330 apiserver.go:52] "Watching apiserver" Dec 16 02:15:41.149714 kubelet[2330]: I1216 02:15:41.149666 2330 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 02:15:42.957315 systemd[1]: Reload requested from client PID 2604 ('systemctl') (unit session-8.scope)... Dec 16 02:15:42.957903 systemd[1]: Reloading... Dec 16 02:15:43.042624 zram_generator::config[2650]: No configuration found. Dec 16 02:15:43.234672 systemd[1]: Reloading finished in 276 ms. Dec 16 02:15:43.258022 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 02:15:43.277494 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 02:15:43.277796 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 02:15:43.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:43.278658 systemd[1]: kubelet.service: Consumed 1.501s CPU time, 130.3M memory peak. Dec 16 02:15:43.280308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 02:15:43.280736 kernel: kauditd_printk_skb: 201 callbacks suppressed Dec 16 02:15:43.280799 kernel: audit: type=1131 audit(1765851343.276:390): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:43.280000 audit: BPF prog-id=113 op=LOAD Dec 16 02:15:43.280000 audit: BPF prog-id=69 op=UNLOAD Dec 16 02:15:43.281000 audit: BPF prog-id=114 op=LOAD Dec 16 02:15:43.284362 kernel: audit: type=1334 audit(1765851343.280:391): prog-id=113 op=LOAD Dec 16 02:15:43.284390 kernel: audit: type=1334 audit(1765851343.280:392): prog-id=69 op=UNLOAD Dec 16 02:15:43.284409 kernel: audit: type=1334 audit(1765851343.281:393): prog-id=114 op=LOAD Dec 16 02:15:43.284425 kernel: audit: type=1334 audit(1765851343.283:394): prog-id=115 op=LOAD Dec 16 02:15:43.284441 kernel: audit: type=1334 audit(1765851343.283:395): prog-id=70 op=UNLOAD Dec 16 02:15:43.284461 kernel: audit: type=1334 audit(1765851343.283:396): prog-id=71 op=UNLOAD Dec 16 02:15:43.284476 kernel: audit: type=1334 audit(1765851343.283:397): prog-id=116 op=LOAD Dec 16 02:15:43.283000 audit: BPF prog-id=115 op=LOAD Dec 16 02:15:43.283000 audit: BPF prog-id=70 op=UNLOAD Dec 16 02:15:43.283000 audit: BPF prog-id=71 op=UNLOAD Dec 16 02:15:43.283000 audit: BPF prog-id=116 op=LOAD Dec 16 02:15:43.283000 audit: BPF prog-id=72 op=UNLOAD Dec 16 02:15:43.288090 kernel: audit: type=1334 audit(1765851343.283:398): prog-id=72 op=UNLOAD Dec 16 02:15:43.288133 kernel: audit: type=1334 audit(1765851343.285:399): prog-id=117 op=LOAD Dec 16 02:15:43.285000 audit: BPF prog-id=117 op=LOAD Dec 16 02:15:43.285000 audit: BPF prog-id=77 op=UNLOAD Dec 16 02:15:43.286000 audit: BPF prog-id=118 op=LOAD Dec 16 02:15:43.302000 audit: BPF prog-id=119 op=LOAD Dec 16 02:15:43.302000 audit: BPF prog-id=78 op=UNLOAD Dec 16 02:15:43.302000 audit: BPF prog-id=79 op=UNLOAD Dec 16 02:15:43.302000 audit: BPF prog-id=120 op=LOAD Dec 16 02:15:43.302000 audit: BPF prog-id=80 op=UNLOAD Dec 16 02:15:43.302000 audit: BPF prog-id=121 op=LOAD Dec 16 02:15:43.303000 audit: BPF prog-id=122 op=LOAD Dec 16 02:15:43.303000 audit: BPF prog-id=81 op=UNLOAD Dec 16 02:15:43.303000 audit: BPF prog-id=82 op=UNLOAD Dec 16 02:15:43.304000 audit: BPF prog-id=123 op=LOAD Dec 16 02:15:43.304000 audit: BPF prog-id=63 op=UNLOAD Dec 16 02:15:43.304000 audit: BPF prog-id=124 op=LOAD Dec 16 02:15:43.304000 audit: BPF prog-id=125 op=LOAD Dec 16 02:15:43.304000 audit: BPF prog-id=64 op=UNLOAD Dec 16 02:15:43.304000 audit: BPF prog-id=65 op=UNLOAD Dec 16 02:15:43.304000 audit: BPF prog-id=126 op=LOAD Dec 16 02:15:43.304000 audit: BPF prog-id=66 op=UNLOAD Dec 16 02:15:43.305000 audit: BPF prog-id=127 op=LOAD Dec 16 02:15:43.305000 audit: BPF prog-id=73 op=UNLOAD Dec 16 02:15:43.305000 audit: BPF prog-id=128 op=LOAD Dec 16 02:15:43.305000 audit: BPF prog-id=129 op=LOAD Dec 16 02:15:43.305000 audit: BPF prog-id=67 op=UNLOAD Dec 16 02:15:43.305000 audit: BPF prog-id=68 op=UNLOAD Dec 16 02:15:43.306000 audit: BPF prog-id=130 op=LOAD Dec 16 02:15:43.306000 audit: BPF prog-id=74 op=UNLOAD Dec 16 02:15:43.306000 audit: BPF prog-id=131 op=LOAD Dec 16 02:15:43.306000 audit: BPF prog-id=132 op=LOAD Dec 16 02:15:43.306000 audit: BPF prog-id=75 op=UNLOAD Dec 16 02:15:43.306000 audit: BPF prog-id=76 op=UNLOAD Dec 16 02:15:43.429854 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 02:15:43.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:43.435535 (kubelet)[2693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 02:15:43.476742 kubelet[2693]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 02:15:43.476742 kubelet[2693]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 02:15:43.476742 kubelet[2693]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 02:15:43.477065 kubelet[2693]: I1216 02:15:43.476795 2693 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 02:15:43.482813 kubelet[2693]: I1216 02:15:43.482764 2693 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 02:15:43.482813 kubelet[2693]: I1216 02:15:43.482798 2693 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 02:15:43.483034 kubelet[2693]: I1216 02:15:43.483016 2693 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 02:15:43.484249 kubelet[2693]: I1216 02:15:43.484222 2693 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 16 02:15:43.486458 kubelet[2693]: I1216 02:15:43.486364 2693 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 02:15:43.490699 kubelet[2693]: I1216 02:15:43.490658 2693 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 02:15:43.494605 kubelet[2693]: I1216 02:15:43.493534 2693 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 02:15:43.494605 kubelet[2693]: I1216 02:15:43.493773 2693 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 02:15:43.494605 kubelet[2693]: I1216 02:15:43.493798 2693 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 02:15:43.494605 kubelet[2693]: I1216 02:15:43.494065 2693 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 02:15:43.494794 kubelet[2693]: I1216 02:15:43.494077 2693 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 02:15:43.494794 kubelet[2693]: I1216 02:15:43.494130 2693 state_mem.go:36] "Initialized new in-memory state store" Dec 16 02:15:43.494794 kubelet[2693]: I1216 02:15:43.494248 2693 kubelet.go:446] "Attempting to sync node with API server" Dec 16 02:15:43.494794 kubelet[2693]: I1216 02:15:43.494259 2693 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 02:15:43.494794 kubelet[2693]: I1216 02:15:43.494282 2693 kubelet.go:352] "Adding apiserver pod source" Dec 16 02:15:43.494794 kubelet[2693]: I1216 02:15:43.494296 2693 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 02:15:43.495114 kubelet[2693]: I1216 02:15:43.495071 2693 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 02:15:43.495716 kubelet[2693]: I1216 02:15:43.495690 2693 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 02:15:43.496175 kubelet[2693]: I1216 02:15:43.496153 2693 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 02:15:43.496215 kubelet[2693]: I1216 02:15:43.496193 2693 server.go:1287] "Started kubelet" Dec 16 02:15:43.497750 kubelet[2693]: I1216 02:15:43.497708 2693 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 02:15:43.497887 kubelet[2693]: I1216 02:15:43.497836 2693 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 02:15:43.498046 kubelet[2693]: I1216 02:15:43.498021 2693 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 02:15:43.498538 kubelet[2693]: I1216 02:15:43.498509 2693 server.go:479] "Adding debug handlers to kubelet server" Dec 16 02:15:43.498842 kubelet[2693]: I1216 02:15:43.498808 2693 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 02:15:43.500358 kubelet[2693]: E1216 02:15:43.500318 2693 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 02:15:43.500415 kubelet[2693]: E1216 02:15:43.500399 2693 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 02:15:43.500449 kubelet[2693]: I1216 02:15:43.500424 2693 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 02:15:43.500698 kubelet[2693]: I1216 02:15:43.500583 2693 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 02:15:43.500698 kubelet[2693]: I1216 02:15:43.500690 2693 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 02:15:43.500791 kubelet[2693]: I1216 02:15:43.500775 2693 reconciler.go:26] "Reconciler: start to sync state" Dec 16 02:15:43.501659 kubelet[2693]: I1216 02:15:43.501639 2693 factory.go:221] Registration of the systemd container factory successfully Dec 16 02:15:43.501829 kubelet[2693]: I1216 02:15:43.501808 2693 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 02:15:43.509671 kubelet[2693]: I1216 02:15:43.509640 2693 factory.go:221] Registration of the containerd container factory successfully Dec 16 02:15:43.519016 kubelet[2693]: I1216 02:15:43.518977 2693 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 02:15:43.519809 kubelet[2693]: I1216 02:15:43.519790 2693 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 02:15:43.519846 kubelet[2693]: I1216 02:15:43.519812 2693 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 02:15:43.519846 kubelet[2693]: I1216 02:15:43.519829 2693 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 02:15:43.519846 kubelet[2693]: I1216 02:15:43.519836 2693 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 02:15:43.519905 kubelet[2693]: E1216 02:15:43.519872 2693 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 02:15:43.557301 kubelet[2693]: I1216 02:15:43.557271 2693 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 02:15:43.557301 kubelet[2693]: I1216 02:15:43.557290 2693 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 02:15:43.557301 kubelet[2693]: I1216 02:15:43.557308 2693 state_mem.go:36] "Initialized new in-memory state store" Dec 16 02:15:43.557471 kubelet[2693]: I1216 02:15:43.557460 2693 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 02:15:43.557493 kubelet[2693]: I1216 02:15:43.557470 2693 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 02:15:43.557493 kubelet[2693]: I1216 02:15:43.557489 2693 policy_none.go:49] "None policy: Start" Dec 16 02:15:43.557535 kubelet[2693]: I1216 02:15:43.557496 2693 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 02:15:43.557535 kubelet[2693]: I1216 02:15:43.557505 2693 state_mem.go:35] "Initializing new in-memory state store" Dec 16 02:15:43.557637 kubelet[2693]: I1216 02:15:43.557621 2693 state_mem.go:75] "Updated machine memory state" Dec 16 02:15:43.561214 kubelet[2693]: I1216 02:15:43.561177 2693 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 02:15:43.561486 kubelet[2693]: I1216 02:15:43.561468 2693 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 02:15:43.561576 kubelet[2693]: I1216 02:15:43.561544 2693 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 02:15:43.561798 kubelet[2693]: I1216 02:15:43.561780 2693 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 02:15:43.563501 kubelet[2693]: E1216 02:15:43.563458 2693 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 02:15:43.621391 kubelet[2693]: I1216 02:15:43.621342 2693 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 02:15:43.621497 kubelet[2693]: I1216 02:15:43.621416 2693 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 02:15:43.621624 kubelet[2693]: I1216 02:15:43.621543 2693 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 02:15:43.665469 kubelet[2693]: I1216 02:15:43.665413 2693 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 02:15:43.672398 kubelet[2693]: I1216 02:15:43.672373 2693 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 16 02:15:43.672474 kubelet[2693]: I1216 02:15:43.672451 2693 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 02:15:43.702187 kubelet[2693]: I1216 02:15:43.702155 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 02:15:43.702312 kubelet[2693]: I1216 02:15:43.702294 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b832cfa960cce4bdde34b5758a30aa47-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b832cfa960cce4bdde34b5758a30aa47\") " pod="kube-system/kube-apiserver-localhost" Dec 16 02:15:43.702423 kubelet[2693]: I1216 02:15:43.702360 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b832cfa960cce4bdde34b5758a30aa47-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b832cfa960cce4bdde34b5758a30aa47\") " pod="kube-system/kube-apiserver-localhost" Dec 16 02:15:43.702423 kubelet[2693]: I1216 02:15:43.702381 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 02:15:43.702504 kubelet[2693]: I1216 02:15:43.702493 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 02:15:43.702698 kubelet[2693]: I1216 02:15:43.702616 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b832cfa960cce4bdde34b5758a30aa47-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b832cfa960cce4bdde34b5758a30aa47\") " pod="kube-system/kube-apiserver-localhost" Dec 16 02:15:43.702698 kubelet[2693]: I1216 02:15:43.702637 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 02:15:43.702698 kubelet[2693]: I1216 02:15:43.702654 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 02:15:43.702698 kubelet[2693]: I1216 02:15:43.702669 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 16 02:15:43.928509 kubelet[2693]: E1216 02:15:43.928377 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:43.928509 kubelet[2693]: E1216 02:15:43.928438 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:43.928509 kubelet[2693]: E1216 02:15:43.928451 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:44.496595 kubelet[2693]: I1216 02:15:44.496539 2693 apiserver.go:52] "Watching apiserver" Dec 16 02:15:44.501513 kubelet[2693]: I1216 02:15:44.501476 2693 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 02:15:44.546772 kubelet[2693]: E1216 02:15:44.546738 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:44.549473 kubelet[2693]: E1216 02:15:44.547733 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:44.549473 kubelet[2693]: I1216 02:15:44.547788 2693 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 02:15:44.556287 kubelet[2693]: E1216 02:15:44.554621 2693 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 02:15:44.556287 kubelet[2693]: E1216 02:15:44.554742 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:44.591034 kubelet[2693]: I1216 02:15:44.590837 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.590817232 podStartE2EDuration="1.590817232s" podCreationTimestamp="2025-12-16 02:15:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:15:44.579999076 +0000 UTC 
m=+1.141149819" watchObservedRunningTime="2025-12-16 02:15:44.590817232 +0000 UTC m=+1.151967975" Dec 16 02:15:44.600108 kubelet[2693]: I1216 02:15:44.600059 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.600044968 podStartE2EDuration="1.600044968s" podCreationTimestamp="2025-12-16 02:15:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:15:44.593001386 +0000 UTC m=+1.154152129" watchObservedRunningTime="2025-12-16 02:15:44.600044968 +0000 UTC m=+1.161195711" Dec 16 02:15:44.609699 kubelet[2693]: I1216 02:15:44.609393 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.609380606 podStartE2EDuration="1.609380606s" podCreationTimestamp="2025-12-16 02:15:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:15:44.600023756 +0000 UTC m=+1.161174499" watchObservedRunningTime="2025-12-16 02:15:44.609380606 +0000 UTC m=+1.170531349" Dec 16 02:15:45.548813 kubelet[2693]: E1216 02:15:45.548432 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:45.550244 kubelet[2693]: E1216 02:15:45.549917 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:46.553206 kubelet[2693]: E1216 02:15:46.553162 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:47.552596 kubelet[2693]: E1216 02:15:47.552565 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:49.525429 kubelet[2693]: I1216 02:15:49.525400 2693 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 02:15:49.525804 containerd[1550]: time="2025-12-16T02:15:49.525651069Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 02:15:49.526556 kubelet[2693]: I1216 02:15:49.526240 2693 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 02:15:50.366046 systemd[1]: Created slice kubepods-besteffort-pode07fa5e8_1a30_4ce9_8564_afe0cb1e9d27.slice - libcontainer container kubepods-besteffort-pode07fa5e8_1a30_4ce9_8564_afe0cb1e9d27.slice. 
Dec 16 02:15:50.547839 kubelet[2693]: I1216 02:15:50.547775 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e07fa5e8-1a30-4ce9-8564-afe0cb1e9d27-kube-proxy\") pod \"kube-proxy-62dnb\" (UID: \"e07fa5e8-1a30-4ce9-8564-afe0cb1e9d27\") " pod="kube-system/kube-proxy-62dnb" Dec 16 02:15:50.547839 kubelet[2693]: I1216 02:15:50.547841 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e07fa5e8-1a30-4ce9-8564-afe0cb1e9d27-xtables-lock\") pod \"kube-proxy-62dnb\" (UID: \"e07fa5e8-1a30-4ce9-8564-afe0cb1e9d27\") " pod="kube-system/kube-proxy-62dnb" Dec 16 02:15:50.548239 kubelet[2693]: I1216 02:15:50.547864 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e07fa5e8-1a30-4ce9-8564-afe0cb1e9d27-lib-modules\") pod \"kube-proxy-62dnb\" (UID: \"e07fa5e8-1a30-4ce9-8564-afe0cb1e9d27\") " pod="kube-system/kube-proxy-62dnb" Dec 16 02:15:50.548239 kubelet[2693]: I1216 02:15:50.547887 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjhxq\" (UniqueName: \"kubernetes.io/projected/e07fa5e8-1a30-4ce9-8564-afe0cb1e9d27-kube-api-access-hjhxq\") pod \"kube-proxy-62dnb\" (UID: \"e07fa5e8-1a30-4ce9-8564-afe0cb1e9d27\") " pod="kube-system/kube-proxy-62dnb" Dec 16 02:15:50.687500 kubelet[2693]: E1216 02:15:50.687043 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:50.690004 containerd[1550]: time="2025-12-16T02:15:50.689773991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-62dnb,Uid:e07fa5e8-1a30-4ce9-8564-afe0cb1e9d27,Namespace:kube-system,Attempt:0,}" Dec 16 02:15:50.692954 systemd[1]: Created slice kubepods-besteffort-podf354353f_0f58_42f5_9430_9033f4403941.slice - libcontainer container kubepods-besteffort-podf354353f_0f58_42f5_9430_9033f4403941.slice. Dec 16 02:15:50.718475 containerd[1550]: time="2025-12-16T02:15:50.718135588Z" level=info msg="connecting to shim 819afa01ba638d8644573be2cc1cfefc8f6f353906f2d4023e5bb728fab831e3" address="unix:///run/containerd/s/eab327f09af611119cd0466e45ee3bfee3802b85cc16a461ad5455f604bba089" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:15:50.744844 systemd[1]: Started cri-containerd-819afa01ba638d8644573be2cc1cfefc8f6f353906f2d4023e5bb728fab831e3.scope - libcontainer container 819afa01ba638d8644573be2cc1cfefc8f6f353906f2d4023e5bb728fab831e3. 
Dec 16 02:15:50.753000 audit: BPF prog-id=133 op=LOAD Dec 16 02:15:50.755200 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 02:15:50.755248 kernel: audit: type=1334 audit(1765851350.753:432): prog-id=133 op=LOAD Dec 16 02:15:50.755000 audit: BPF prog-id=134 op=LOAD Dec 16 02:15:50.757243 kernel: audit: type=1334 audit(1765851350.755:433): prog-id=134 op=LOAD Dec 16 02:15:50.757290 kernel: audit: type=1300 audit(1765851350.755:433): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=2753 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:50.755000 audit[2765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=2753 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:50.761623 kernel: audit: type=1327 audit(1765851350.755:433): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831396166613031626136333864383634343537336265326363316366 Dec 16 02:15:50.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831396166613031626136333864383634343537336265326363316366 Dec 16 02:15:50.755000 audit: BPF prog-id=134 op=UNLOAD Dec 16 02:15:50.764663 kernel: audit: type=1334 audit(1765851350.755:434): prog-id=134 op=UNLOAD Dec 16 02:15:50.764710 kernel: audit: type=1300 audit(1765851350.755:434): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2753 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:50.755000 audit[2765]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2753 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:50.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831396166613031626136333864383634343537336265326363316366 Dec 16 02:15:50.771180 kernel: audit: type=1327 audit(1765851350.755:434): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831396166613031626136333864383634343537336265326363316366 Dec 16 02:15:50.771267 kernel: audit: type=1334 audit(1765851350.756:435): prog-id=135 op=LOAD Dec 16 02:15:50.756000 audit: BPF prog-id=135 op=LOAD Dec 16 02:15:50.756000 audit[2765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=2753 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:50.775715 kernel: audit: type=1300 audit(1765851350.756:435): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=2753 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:50.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831396166613031626136333864383634343537336265326363316366 Dec 16 02:15:50.779061 kernel: audit: type=1327 audit(1765851350.756:435): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831396166613031626136333864383634343537336265326363316366 Dec 16 02:15:50.756000 audit: BPF prog-id=136 op=LOAD Dec 16 02:15:50.756000 audit[2765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=2753 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:50.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831396166613031626136333864383634343537336265326363316366 Dec 16 02:15:50.760000 audit: BPF prog-id=136 op=UNLOAD Dec 16 02:15:50.760000 audit[2765]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2753 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:50.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831396166613031626136333864383634343537336265326363316366 Dec 16 02:15:50.760000 audit: BPF prog-id=135 op=UNLOAD Dec 16 02:15:50.760000 audit[2765]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2753 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:50.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831396166613031626136333864383634343537336265326363316366 Dec 16 02:15:50.760000 audit: BPF prog-id=137 op=LOAD Dec 16 02:15:50.760000 audit[2765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=2753 pid=2765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:50.760000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831396166613031626136333864383634343537336265326363316366 Dec 16 02:15:50.792597 containerd[1550]: time="2025-12-16T02:15:50.792526085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-62dnb,Uid:e07fa5e8-1a30-4ce9-8564-afe0cb1e9d27,Namespace:kube-system,Attempt:0,} returns sandbox id \"819afa01ba638d8644573be2cc1cfefc8f6f353906f2d4023e5bb728fab831e3\"" Dec 16 02:15:50.793644 kubelet[2693]: E1216 02:15:50.793621 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:50.796008 containerd[1550]: time="2025-12-16T02:15:50.795921767Z" level=info msg="CreateContainer within sandbox \"819afa01ba638d8644573be2cc1cfefc8f6f353906f2d4023e5bb728fab831e3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 02:15:50.807804 containerd[1550]: time="2025-12-16T02:15:50.807754490Z" level=info msg="Container 2796ef28a3b2aa58a2a40f7d1b15e96b6ac6c06b36543ab1113b375b4a290614: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:15:50.817452 containerd[1550]: time="2025-12-16T02:15:50.817388589Z" level=info msg="CreateContainer within sandbox \"819afa01ba638d8644573be2cc1cfefc8f6f353906f2d4023e5bb728fab831e3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2796ef28a3b2aa58a2a40f7d1b15e96b6ac6c06b36543ab1113b375b4a290614\"" Dec 16 02:15:50.818241 containerd[1550]: time="2025-12-16T02:15:50.818197741Z" level=info msg="StartContainer for \"2796ef28a3b2aa58a2a40f7d1b15e96b6ac6c06b36543ab1113b375b4a290614\"" Dec 16 02:15:50.820141 containerd[1550]: time="2025-12-16T02:15:50.820112867Z" level=info msg="connecting to shim 2796ef28a3b2aa58a2a40f7d1b15e96b6ac6c06b36543ab1113b375b4a290614" address="unix:///run/containerd/s/eab327f09af611119cd0466e45ee3bfee3802b85cc16a461ad5455f604bba089" protocol=ttrpc version=3 Dec 16 02:15:50.839809 systemd[1]: Started cri-containerd-2796ef28a3b2aa58a2a40f7d1b15e96b6ac6c06b36543ab1113b375b4a290614.scope - libcontainer container 2796ef28a3b2aa58a2a40f7d1b15e96b6ac6c06b36543ab1113b375b4a290614. 
Dec 16 02:15:50.850707 kubelet[2693]: I1216 02:15:50.850668 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f354353f-0f58-42f5-9430-9033f4403941-var-lib-calico\") pod \"tigera-operator-7dcd859c48-46sxc\" (UID: \"f354353f-0f58-42f5-9430-9033f4403941\") " pod="tigera-operator/tigera-operator-7dcd859c48-46sxc" Dec 16 02:15:50.850707 kubelet[2693]: I1216 02:15:50.850712 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwk7c\" (UniqueName: \"kubernetes.io/projected/f354353f-0f58-42f5-9430-9033f4403941-kube-api-access-wwk7c\") pod \"tigera-operator-7dcd859c48-46sxc\" (UID: \"f354353f-0f58-42f5-9430-9033f4403941\") " pod="tigera-operator/tigera-operator-7dcd859c48-46sxc" Dec 16 02:15:50.885000 audit: BPF prog-id=138 op=LOAD Dec 16 02:15:50.885000 audit[2790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2753 pid=2790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:50.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237393665663238613362326161353861326134306637643162313565 Dec 16 02:15:50.885000 audit: BPF prog-id=139 op=LOAD Dec 16 02:15:50.885000 audit[2790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2753 pid=2790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:50.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237393665663238613362326161353861326134306637643162313565 Dec 16 02:15:50.885000 audit: BPF prog-id=139 op=UNLOAD Dec 16 02:15:50.885000 audit[2790]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2753 pid=2790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:50.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237393665663238613362326161353861326134306637643162313565 Dec 16 02:15:50.885000 audit: BPF prog-id=138 op=UNLOAD Dec 16 02:15:50.885000 audit[2790]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2753 pid=2790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:50.885000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237393665663238613362326161353861326134306637643162313565 Dec 16 02:15:50.885000 audit: BPF prog-id=140 op=LOAD Dec 16 02:15:50.885000 audit[2790]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2753 pid=2790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:50.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237393665663238613362326161353861326134306637643162313565 Dec 16 02:15:50.903050 containerd[1550]: time="2025-12-16T02:15:50.903012560Z" level=info msg="StartContainer for \"2796ef28a3b2aa58a2a40f7d1b15e96b6ac6c06b36543ab1113b375b4a290614\" returns successfully" Dec 16 02:15:50.935840 kubelet[2693]: E1216 02:15:50.935766 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:50.997824 containerd[1550]: time="2025-12-16T02:15:50.997754540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-46sxc,Uid:f354353f-0f58-42f5-9430-9033f4403941,Namespace:tigera-operator,Attempt:0,}" Dec 16 02:15:51.017248 containerd[1550]: time="2025-12-16T02:15:51.016762116Z" level=info msg="connecting to shim c6aea11a748662175a11aceefb2c6428b8a9d40dc8ae165177c6b2cde483c391" address="unix:///run/containerd/s/c9190aa21e198b1aebe2f3a1f373b069e96587e341c63852bc117b35194fad77" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:15:51.044834 systemd[1]: Started cri-containerd-c6aea11a748662175a11aceefb2c6428b8a9d40dc8ae165177c6b2cde483c391.scope - libcontainer container c6aea11a748662175a11aceefb2c6428b8a9d40dc8ae165177c6b2cde483c391. 
Dec 16 02:15:51.055000 audit: BPF prog-id=141 op=LOAD Dec 16 02:15:51.056000 audit: BPF prog-id=142 op=LOAD Dec 16 02:15:51.056000 audit[2848]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2837 pid=2848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336616561313161373438363632313735613131616365656662326336 Dec 16 02:15:51.056000 audit: BPF prog-id=142 op=UNLOAD Dec 16 02:15:51.056000 audit[2848]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2837 pid=2848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336616561313161373438363632313735613131616365656662326336 Dec 16 02:15:51.056000 audit: BPF prog-id=143 op=LOAD Dec 16 02:15:51.056000 audit[2848]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2837 pid=2848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336616561313161373438363632313735613131616365656662326336 Dec 16 02:15:51.056000 audit: BPF prog-id=144 op=LOAD Dec 16 02:15:51.056000 audit[2848]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2837 pid=2848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336616561313161373438363632313735613131616365656662326336 Dec 16 02:15:51.056000 audit: BPF prog-id=144 op=UNLOAD Dec 16 02:15:51.056000 audit[2848]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2837 pid=2848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336616561313161373438363632313735613131616365656662326336 Dec 16 02:15:51.056000 audit: BPF prog-id=143 op=UNLOAD Dec 16 02:15:51.056000 audit[2848]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2837 pid=2848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336616561313161373438363632313735613131616365656662326336 Dec 16 02:15:51.056000 audit: BPF prog-id=145 op=LOAD Dec 16 02:15:51.056000 audit[2848]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2837 pid=2848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336616561313161373438363632313735613131616365656662326336 Dec 16 02:15:51.069000 audit[2892]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=2892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:51.069000 audit[2892]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc91c9fa0 a2=0 a3=1 items=0 ppid=2803 pid=2892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.069000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 02:15:51.071000 audit[2894]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=2894 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.071000 audit[2894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd8364b40 a2=0 a3=1 items=0 ppid=2803 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.071000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 02:15:51.073000 audit[2895]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=2895 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.073000 audit[2895]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcb4c6a10 a2=0 a3=1 items=0 ppid=2803 pid=2895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.073000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 02:15:51.075000 audit[2897]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=2897 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:51.075000 audit[2897]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffde47c270 
a2=0 a3=1 items=0 ppid=2803 pid=2897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.075000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 02:15:51.076000 audit[2898]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=2898 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.076000 audit[2898]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe282d5c0 a2=0 a3=1 items=0 ppid=2803 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.076000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 02:15:51.077000 audit[2899]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=2899 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:51.077000 audit[2899]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffabc8a10 a2=0 a3=1 items=0 ppid=2803 pid=2899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.077000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 02:15:51.088620 containerd[1550]: time="2025-12-16T02:15:51.087205609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-46sxc,Uid:f354353f-0f58-42f5-9430-9033f4403941,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c6aea11a748662175a11aceefb2c6428b8a9d40dc8ae165177c6b2cde483c391\"" Dec 16 02:15:51.090372 containerd[1550]: time="2025-12-16T02:15:51.090325438Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 02:15:51.177000 audit[2905]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:51.177000 audit[2905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffffd286a40 a2=0 a3=1 items=0 ppid=2803 pid=2905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.177000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 02:15:51.180000 audit[2907]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:51.180000 audit[2907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe4e15610 a2=0 a3=1 items=0 ppid=2803 pid=2907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.180000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 02:15:51.184000 audit[2910]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:51.184000 audit[2910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffecdf9b40 a2=0 a3=1 items=0 ppid=2803 pid=2910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.184000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 02:15:51.185000 audit[2911]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=2911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:51.185000 audit[2911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc08f7300 a2=0 a3=1 items=0 ppid=2803 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 02:15:51.187000 audit[2913]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=2913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:51.187000 audit[2913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff2ea1190 a2=0 a3=1 items=0 ppid=2803 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.187000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 02:15:51.188000 audit[2914]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:51.188000 audit[2914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd316fde0 a2=0 a3=1 items=0 ppid=2803 pid=2914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.188000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 02:15:51.191000 audit[2916]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:51.191000 audit[2916]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffda474890 a2=0 a3=1 items=0 ppid=2803 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.191000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 02:15:51.195000 audit[2919]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:51.195000 audit[2919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcef57080 a2=0 a3=1 items=0 ppid=2803 pid=2919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.195000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 02:15:51.197000 audit[2920]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:51.197000 audit[2920]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcf6aa610 a2=0 a3=1 items=0 ppid=2803 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.197000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 02:15:51.199000 audit[2922]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:51.199000 audit[2922]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe2f92340 a2=0 a3=1 items=0 ppid=2803 pid=2922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.199000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 02:15:51.200000 audit[2923]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:51.200000 audit[2923]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff3f6cc20 a2=0 a3=1 items=0 ppid=2803 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.200000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 02:15:51.203000 audit[2925]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2925 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Dec 16 02:15:51.203000 audit[2925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe0c07bf0 a2=0 a3=1 items=0 ppid=2803 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.203000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 02:15:51.208000 audit[2928]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2928 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:51.208000 audit[2928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffff28a90 a2=0 a3=1 items=0 ppid=2803 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.208000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 02:15:51.213000 audit[2931]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=2931 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:51.213000 audit[2931]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdbe27350 a2=0 a3=1 items=0 ppid=2803 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.213000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 02:15:51.214000 audit[2932]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=2932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:51.214000 audit[2932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffea250520 a2=0 a3=1 items=0 ppid=2803 pid=2932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.214000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 02:15:51.216000 audit[2934]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=2934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:51.216000 audit[2934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe9b5e350 a2=0 a3=1 items=0 ppid=2803 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.216000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:15:51.220000 audit[2937]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=2937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:51.220000 audit[2937]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffe17ef40 a2=0 a3=1 items=0 ppid=2803 pid=2937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.220000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:15:51.222000 audit[2938]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=2938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:51.222000 audit[2938]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcd719aa0 a2=0 a3=1 items=0 ppid=2803 pid=2938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.222000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 02:15:51.224000 audit[2940]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=2940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 02:15:51.224000 audit[2940]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffede81f70 a2=0 a3=1 items=0 ppid=2803 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.224000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 02:15:51.242000 audit[2946]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=2946 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:15:51.242000 audit[2946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff45216e0 a2=0 a3=1 items=0 ppid=2803 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.242000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:15:51.256000 audit[2946]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=2946 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:15:51.256000 audit[2946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffff45216e0 a2=0 a3=1 items=0 ppid=2803 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.256000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:15:51.258000 audit[2952]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=2952 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.258000 audit[2952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffffb35430 a2=0 a3=1 items=0 ppid=2803 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.258000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 02:15:51.260000 audit[2954]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=2954 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.260000 audit[2954]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc1103b60 a2=0 a3=1 items=0 ppid=2803 pid=2954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.260000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 02:15:51.264000 audit[2957]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=2957 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.264000 audit[2957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff045b260 a2=0 a3=1 items=0 ppid=2803 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.264000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 02:15:51.265000 audit[2958]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.265000 audit[2958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdef09a80 a2=0 a3=1 items=0 ppid=2803 pid=2958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.265000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 02:15:51.268000 audit[2960]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.268000 audit[2960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe70617e0 a2=0 a3=1 items=0 
ppid=2803 pid=2960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.268000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 02:15:51.269000 audit[2961]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=2961 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.269000 audit[2961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff49d7b10 a2=0 a3=1 items=0 ppid=2803 pid=2961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.269000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 02:15:51.272000 audit[2963]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=2963 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.272000 audit[2963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffddd411f0 a2=0 a3=1 items=0 ppid=2803 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.272000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 02:15:51.276000 audit[2966]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=2966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.276000 audit[2966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffffcd25920 a2=0 a3=1 items=0 ppid=2803 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.276000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 02:15:51.277000 audit[2967]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=2967 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.277000 audit[2967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd3de6ce0 a2=0 a3=1 items=0 ppid=2803 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.277000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 02:15:51.279000 audit[2969]: NETFILTER_CFG table=filter:90 family=10 
entries=1 op=nft_register_rule pid=2969 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.279000 audit[2969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff077b460 a2=0 a3=1 items=0 ppid=2803 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.279000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 02:15:51.280000 audit[2970]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=2970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.280000 audit[2970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd81c0bf0 a2=0 a3=1 items=0 ppid=2803 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.280000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 02:15:51.283000 audit[2972]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=2972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.283000 audit[2972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe436a8c0 a2=0 a3=1 items=0 ppid=2803 pid=2972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.283000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 02:15:51.287000 audit[2975]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=2975 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.287000 audit[2975]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffef301aa0 a2=0 a3=1 items=0 ppid=2803 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.287000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 02:15:51.291000 audit[2978]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=2978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.291000 audit[2978]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff1ca9230 a2=0 a3=1 items=0 ppid=2803 pid=2978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
02:15:51.291000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 02:15:51.292000 audit[2979]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=2979 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.292000 audit[2979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffca966ca0 a2=0 a3=1 items=0 ppid=2803 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.292000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 02:15:51.294000 audit[2981]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=2981 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.294000 audit[2981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffc350a130 a2=0 a3=1 items=0 ppid=2803 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.294000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:15:51.298000 audit[2984]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=2984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.298000 audit[2984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcbd957b0 a2=0 a3=1 items=0 ppid=2803 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.298000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 02:15:51.299000 audit[2985]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=2985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.299000 audit[2985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1c98440 a2=0 a3=1 items=0 ppid=2803 pid=2985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.299000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 02:15:51.302000 audit[2987]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=2987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.302000 audit[2987]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffff5e7af90 a2=0 a3=1 items=0 ppid=2803 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.302000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 02:15:51.303000 audit[2988]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=2988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.303000 audit[2988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc6c971d0 a2=0 a3=1 items=0 ppid=2803 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.303000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 02:15:51.305000 audit[2990]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=2990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.305000 audit[2990]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffefc2e770 a2=0 a3=1 items=0 ppid=2803 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.305000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 02:15:51.309000 audit[2993]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=2993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 02:15:51.309000 audit[2993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffdca53c20 a2=0 a3=1 items=0 ppid=2803 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.309000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 02:15:51.312000 audit[2995]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=2995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 02:15:51.312000 audit[2995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffe31afb10 a2=0 a3=1 items=0 ppid=2803 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.312000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:15:51.313000 audit[2995]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=2995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 02:15:51.313000 audit[2995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffe31afb10 a2=0 a3=1 items=0 ppid=2803 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:51.313000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:15:51.561955 kubelet[2693]: E1216 02:15:51.561723 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:51.563521 kubelet[2693]: E1216 02:15:51.563419 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:51.586871 kubelet[2693]: I1216 02:15:51.586774 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-62dnb" podStartSLOduration=1.586754775 podStartE2EDuration="1.586754775s" podCreationTimestamp="2025-12-16 02:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:15:51.585646988 +0000 UTC m=+8.146797771" watchObservedRunningTime="2025-12-16 02:15:51.586754775 +0000 UTC m=+8.147905478" Dec 16 02:15:52.270652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount504448170.mount: Deactivated successfully. Dec 16 02:15:52.518116 kubelet[2693]: E1216 02:15:52.518084 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:52.563374 kubelet[2693]: E1216 02:15:52.562738 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:53.553322 containerd[1550]: time="2025-12-16T02:15:53.553268251Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:15:53.554634 containerd[1550]: time="2025-12-16T02:15:53.554561983Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 16 02:15:53.555530 containerd[1550]: time="2025-12-16T02:15:53.555492005Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:15:53.557849 containerd[1550]: time="2025-12-16T02:15:53.557807237Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:15:53.558760 containerd[1550]: time="2025-12-16T02:15:53.558713290Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.468343192s" Dec 16 02:15:53.558760 containerd[1550]: time="2025-12-16T02:15:53.558755467Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 02:15:53.562143 containerd[1550]: 
time="2025-12-16T02:15:53.561722126Z" level=info msg="CreateContainer within sandbox \"c6aea11a748662175a11aceefb2c6428b8a9d40dc8ae165177c6b2cde483c391\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 02:15:53.566617 kubelet[2693]: E1216 02:15:53.566522 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:53.576906 containerd[1550]: time="2025-12-16T02:15:53.576835299Z" level=info msg="Container e9463ac38f48be610843c2ade66d8a1d53a3e531274ec30dceeecc0d56a1f4e9: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:15:53.588059 containerd[1550]: time="2025-12-16T02:15:53.588018016Z" level=info msg="CreateContainer within sandbox \"c6aea11a748662175a11aceefb2c6428b8a9d40dc8ae165177c6b2cde483c391\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e9463ac38f48be610843c2ade66d8a1d53a3e531274ec30dceeecc0d56a1f4e9\"" Dec 16 02:15:53.588566 containerd[1550]: time="2025-12-16T02:15:53.588539950Z" level=info msg="StartContainer for \"e9463ac38f48be610843c2ade66d8a1d53a3e531274ec30dceeecc0d56a1f4e9\"" Dec 16 02:15:53.590101 containerd[1550]: time="2025-12-16T02:15:53.589979062Z" level=info msg="connecting to shim e9463ac38f48be610843c2ade66d8a1d53a3e531274ec30dceeecc0d56a1f4e9" address="unix:///run/containerd/s/c9190aa21e198b1aebe2f3a1f373b069e96587e341c63852bc117b35194fad77" protocol=ttrpc version=3 Dec 16 02:15:53.682826 systemd[1]: Started cri-containerd-e9463ac38f48be610843c2ade66d8a1d53a3e531274ec30dceeecc0d56a1f4e9.scope - libcontainer container e9463ac38f48be610843c2ade66d8a1d53a3e531274ec30dceeecc0d56a1f4e9. Dec 16 02:15:53.693000 audit: BPF prog-id=146 op=LOAD Dec 16 02:15:53.693000 audit: BPF prog-id=147 op=LOAD Dec 16 02:15:53.693000 audit[3005]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2837 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:53.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539343633616333386634386265363130383433633261646536366438 Dec 16 02:15:53.693000 audit: BPF prog-id=147 op=UNLOAD Dec 16 02:15:53.693000 audit[3005]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2837 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:53.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539343633616333386634386265363130383433633261646536366438 Dec 16 02:15:53.693000 audit: BPF prog-id=148 op=LOAD Dec 16 02:15:53.693000 audit[3005]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2837 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:53.693000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539343633616333386634386265363130383433633261646536366438 Dec 16 02:15:53.693000 audit: BPF prog-id=149 op=LOAD Dec 16 02:15:53.693000 audit[3005]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2837 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:53.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539343633616333386634386265363130383433633261646536366438 Dec 16 02:15:53.693000 audit: BPF prog-id=149 op=UNLOAD Dec 16 02:15:53.693000 audit[3005]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2837 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:53.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539343633616333386634386265363130383433633261646536366438 Dec 16 02:15:53.693000 audit: BPF prog-id=148 op=UNLOAD Dec 16 02:15:53.693000 audit[3005]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2837 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:53.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539343633616333386634386265363130383433633261646536366438 Dec 16 02:15:53.693000 audit: BPF prog-id=150 op=LOAD Dec 16 02:15:53.693000 audit[3005]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2837 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:53.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539343633616333386634386265363130383433633261646536366438 Dec 16 02:15:53.712842 containerd[1550]: time="2025-12-16T02:15:53.712785863Z" level=info msg="StartContainer for \"e9463ac38f48be610843c2ade66d8a1d53a3e531274ec30dceeecc0d56a1f4e9\" returns successfully" Dec 16 02:15:54.578571 kubelet[2693]: I1216 02:15:54.578505 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-46sxc" podStartSLOduration=2.10889087 podStartE2EDuration="4.57848732s" podCreationTimestamp="2025-12-16 02:15:50 +0000 UTC" firstStartedPulling="2025-12-16 02:15:51.089878993 +0000 
UTC m=+7.651029696" lastFinishedPulling="2025-12-16 02:15:53.559475443 +0000 UTC m=+10.120626146" observedRunningTime="2025-12-16 02:15:54.57787328 +0000 UTC m=+11.139024023" watchObservedRunningTime="2025-12-16 02:15:54.57848732 +0000 UTC m=+11.139638063" Dec 16 02:15:55.704090 systemd[1]: cri-containerd-e9463ac38f48be610843c2ade66d8a1d53a3e531274ec30dceeecc0d56a1f4e9.scope: Deactivated successfully. Dec 16 02:15:55.708000 audit: BPF prog-id=146 op=UNLOAD Dec 16 02:15:55.708000 audit: BPF prog-id=150 op=UNLOAD Dec 16 02:15:55.731728 containerd[1550]: time="2025-12-16T02:15:55.731674949Z" level=info msg="received container exit event container_id:\"e9463ac38f48be610843c2ade66d8a1d53a3e531274ec30dceeecc0d56a1f4e9\" id:\"e9463ac38f48be610843c2ade66d8a1d53a3e531274ec30dceeecc0d56a1f4e9\" pid:3018 exit_status:1 exited_at:{seconds:1765851355 nanos:723763543}" Dec 16 02:15:55.800074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9463ac38f48be610843c2ade66d8a1d53a3e531274ec30dceeecc0d56a1f4e9-rootfs.mount: Deactivated successfully. Dec 16 02:15:56.574073 kubelet[2693]: I1216 02:15:56.573984 2693 scope.go:117] "RemoveContainer" containerID="e9463ac38f48be610843c2ade66d8a1d53a3e531274ec30dceeecc0d56a1f4e9" Dec 16 02:15:56.576005 containerd[1550]: time="2025-12-16T02:15:56.575934799Z" level=info msg="CreateContainer within sandbox \"c6aea11a748662175a11aceefb2c6428b8a9d40dc8ae165177c6b2cde483c391\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 16 02:15:56.587454 containerd[1550]: time="2025-12-16T02:15:56.587385181Z" level=info msg="Container 52e69e64e8c61dd5a2cf0da3ca3f229378a40d9041f0ea102ca2f065d40b0c4d: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:15:56.603958 containerd[1550]: time="2025-12-16T02:15:56.603889899Z" level=info msg="CreateContainer within sandbox \"c6aea11a748662175a11aceefb2c6428b8a9d40dc8ae165177c6b2cde483c391\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"52e69e64e8c61dd5a2cf0da3ca3f229378a40d9041f0ea102ca2f065d40b0c4d\"" Dec 16 02:15:56.604467 containerd[1550]: time="2025-12-16T02:15:56.604438251Z" level=info msg="StartContainer for \"52e69e64e8c61dd5a2cf0da3ca3f229378a40d9041f0ea102ca2f065d40b0c4d\"" Dec 16 02:15:56.606513 containerd[1550]: time="2025-12-16T02:15:56.606458201Z" level=info msg="connecting to shim 52e69e64e8c61dd5a2cf0da3ca3f229378a40d9041f0ea102ca2f065d40b0c4d" address="unix:///run/containerd/s/c9190aa21e198b1aebe2f3a1f373b069e96587e341c63852bc117b35194fad77" protocol=ttrpc version=3 Dec 16 02:15:56.631869 systemd[1]: Started cri-containerd-52e69e64e8c61dd5a2cf0da3ca3f229378a40d9041f0ea102ca2f065d40b0c4d.scope - libcontainer container 52e69e64e8c61dd5a2cf0da3ca3f229378a40d9041f0ea102ca2f065d40b0c4d. 
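The pod_startup_latency_tracker record for tigera-operator above reports both an end-to-end duration and an SLO duration. The SLO figure appears to be the end-to-end startup time with the image-pull window subtracted, and the values in the record bear that out exactly; a quick check (all timestamps fall within the same minute, so plain second values suffice):

```python
# Minimal sketch: verify the arithmetic in the "Observed pod startup duration" record above,
# assuming the SLO duration excludes the image-pull window (which the numbers confirm here).
e2e  = 4.57848732                   # podStartE2EDuration from the record
pull = 53.559475443 - 51.089878993  # lastFinishedPulling - firstStartedPulling (seconds in 02:15)
slo  = e2e - pull
print(round(slo, 8))                # 2.10889087, matching podStartSLOduration in the record
```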
Dec 16 02:15:56.649000 audit: BPF prog-id=151 op=LOAD Dec 16 02:15:56.652196 kernel: kauditd_printk_skb: 226 callbacks suppressed Dec 16 02:15:56.652274 kernel: audit: type=1334 audit(1765851356.649:514): prog-id=151 op=LOAD Dec 16 02:15:56.651000 audit: BPF prog-id=152 op=LOAD Dec 16 02:15:56.653667 kernel: audit: type=1334 audit(1765851356.651:515): prog-id=152 op=LOAD Dec 16 02:15:56.651000 audit[3080]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2837 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:56.658086 kernel: audit: type=1300 audit(1765851356.651:515): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2837 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:56.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532653639653634653863363164643561326366306461336361336632 Dec 16 02:15:56.662366 kernel: audit: type=1327 audit(1765851356.651:515): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532653639653634653863363164643561326366306461336361336632 Dec 16 02:15:56.662442 kernel: audit: type=1334 audit(1765851356.651:516): prog-id=152 op=UNLOAD Dec 16 02:15:56.651000 audit: BPF prog-id=152 op=UNLOAD Dec 16 02:15:56.651000 audit[3080]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2837 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:56.666993 kernel: audit: type=1300 audit(1765851356.651:516): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2837 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:56.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532653639653634653863363164643561326366306461336361336632 Dec 16 02:15:56.671179 kernel: audit: type=1327 audit(1765851356.651:516): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532653639653634653863363164643561326366306461336361336632 Dec 16 02:15:56.656000 audit: BPF prog-id=153 op=LOAD Dec 16 02:15:56.672458 kernel: audit: type=1334 audit(1765851356.656:517): prog-id=153 op=LOAD Dec 16 02:15:56.672522 kernel: audit: type=1300 audit(1765851356.656:517): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2837 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:56.656000 audit[3080]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2837 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:56.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532653639653634653863363164643561326366306461336361336632 Dec 16 02:15:56.680094 kernel: audit: type=1327 audit(1765851356.656:517): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532653639653634653863363164643561326366306461336361336632 Dec 16 02:15:56.656000 audit: BPF prog-id=154 op=LOAD Dec 16 02:15:56.656000 audit[3080]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2837 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:56.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532653639653634653863363164643561326366306461336361336632 Dec 16 02:15:56.656000 audit: BPF prog-id=154 op=UNLOAD Dec 16 02:15:56.656000 audit[3080]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2837 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:56.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532653639653634653863363164643561326366306461336361336632 Dec 16 02:15:56.656000 audit: BPF prog-id=153 op=UNLOAD Dec 16 02:15:56.656000 audit[3080]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2837 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:56.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532653639653634653863363164643561326366306461336361336632 Dec 16 02:15:56.656000 audit: BPF prog-id=155 op=LOAD Dec 16 02:15:56.656000 audit[3080]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2837 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:15:56.656000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532653639653634653863363164643561326366306461336361336632 Dec 16 02:15:56.695770 containerd[1550]: time="2025-12-16T02:15:56.695581946Z" level=info msg="StartContainer for \"52e69e64e8c61dd5a2cf0da3ca3f229378a40d9041f0ea102ca2f065d40b0c4d\" returns successfully" Dec 16 02:15:57.319788 kubelet[2693]: E1216 02:15:57.319624 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:15:58.500905 update_engine[1530]: I20251216 02:15:58.500829 1530 update_attempter.cc:509] Updating boot flags... Dec 16 02:15:59.048852 sudo[1768]: pam_unix(sudo:session): session closed for user root Dec 16 02:15:59.048000 audit[1768]: USER_END pid=1768 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:15:59.048000 audit[1768]: CRED_DISP pid=1768 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 02:15:59.052184 sshd[1767]: Connection closed by 10.0.0.1 port 58488 Dec 16 02:15:59.052723 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Dec 16 02:15:59.053000 audit[1763]: USER_END pid=1763 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:15:59.053000 audit[1763]: CRED_DISP pid=1763 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:15:59.056728 systemd-logind[1526]: Session 8 logged out. Waiting for processes to exit. Dec 16 02:15:59.056986 systemd[1]: sshd@6-10.0.0.94:22-10.0.0.1:58488.service: Deactivated successfully. Dec 16 02:15:59.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.94:22-10.0.0.1:58488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:15:59.059908 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 02:15:59.060127 systemd[1]: session-8.scope: Consumed 7.496s CPU time, 215.5M memory peak. Dec 16 02:15:59.062285 systemd-logind[1526]: Removed session 8. 
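The PROCTITLE fields in the audit records above carry the audited process's command line, hex-encoded with NUL bytes separating the arguments. A minimal decoder, offered here only as an illustration (it is not part of the log), might look like this in Python:

# Illustrative helper: turn an audit PROCTITLE hex value back into argv.
# Audit hex-encodes the command line and separates arguments with NUL bytes.
def decode_proctitle(hex_value: str) -> list[str]:
    raw = bytes.fromhex(hex_value)
    return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg]

# Leading bytes of the runc proctitle recorded above:
print(decode_proctitle("72756E63002D2D726F6F74"))   # ['runc', '--root']

Applied to the full value it yields the runc invocation that set up the container task ("runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/..."); the trailing container ID is cut short in the recorded value itself.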
Dec 16 02:16:01.121000 audit[3159]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3159 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:01.121000 audit[3159]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff4ff2720 a2=0 a3=1 items=0 ppid=2803 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:01.121000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:01.127000 audit[3159]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3159 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:01.127000 audit[3159]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff4ff2720 a2=0 a3=1 items=0 ppid=2803 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:01.127000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:01.142000 audit[3161]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:01.142000 audit[3161]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe9181d30 a2=0 a3=1 items=0 ppid=2803 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:01.142000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:01.150000 audit[3161]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:01.150000 audit[3161]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe9181d30 a2=0 a3=1 items=0 ppid=2803 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:01.150000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:05.587000 audit[3164]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:05.591449 kernel: kauditd_printk_skb: 29 callbacks suppressed Dec 16 02:16:05.591544 kernel: audit: type=1325 audit(1765851365.587:531): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:05.587000 audit[3164]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffd288570 a2=0 a3=1 items=0 ppid=2803 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:05.598511 kernel: 
audit: type=1300 audit(1765851365.587:531): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffd288570 a2=0 a3=1 items=0 ppid=2803 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:05.587000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:05.604155 kernel: audit: type=1327 audit(1765851365.587:531): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:05.602000 audit[3164]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:05.607077 kernel: audit: type=1325 audit(1765851365.602:532): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:05.602000 audit[3164]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffd288570 a2=0 a3=1 items=0 ppid=2803 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:05.612444 kernel: audit: type=1300 audit(1765851365.602:532): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffd288570 a2=0 a3=1 items=0 ppid=2803 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:05.602000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:05.615534 kernel: audit: type=1327 audit(1765851365.602:532): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:05.633000 audit[3166]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3166 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:05.633000 audit[3166]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffea635690 a2=0 a3=1 items=0 ppid=2803 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:05.641634 kernel: audit: type=1325 audit(1765851365.633:533): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3166 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:05.641720 kernel: audit: type=1300 audit(1765851365.633:533): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffea635690 a2=0 a3=1 items=0 ppid=2803 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:05.633000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:05.644480 kernel: audit: type=1327 audit(1765851365.633:533): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:05.644000 audit[3166]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3166 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:05.646996 kernel: audit: type=1325 audit(1765851365.644:534): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3166 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:05.644000 audit[3166]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffea635690 a2=0 a3=1 items=0 ppid=2803 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:05.644000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:06.655000 audit[3168]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:06.655000 audit[3168]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc42eb3c0 a2=0 a3=1 items=0 ppid=2803 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:06.655000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:06.664000 audit[3168]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:06.664000 audit[3168]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc42eb3c0 a2=0 a3=1 items=0 ppid=2803 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:06.664000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:08.020000 audit[3170]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3170 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:08.020000 audit[3170]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffee923720 a2=0 a3=1 items=0 ppid=2803 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:08.020000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:08.027000 audit[3170]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3170 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:08.027000 audit[3170]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffee923720 a2=0 a3=1 items=0 ppid=2803 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 02:16:08.027000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:08.057489 systemd[1]: Created slice kubepods-besteffort-pode0927083_96e7_4bb4_ae9c_654f696fc4ce.slice - libcontainer container kubepods-besteffort-pode0927083_96e7_4bb4_ae9c_654f696fc4ce.slice. Dec 16 02:16:08.164395 kubelet[2693]: I1216 02:16:08.164336 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e0927083-96e7-4bb4-ae9c-654f696fc4ce-typha-certs\") pod \"calico-typha-7c47dfcd9c-tbrfq\" (UID: \"e0927083-96e7-4bb4-ae9c-654f696fc4ce\") " pod="calico-system/calico-typha-7c47dfcd9c-tbrfq" Dec 16 02:16:08.164395 kubelet[2693]: I1216 02:16:08.164396 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0927083-96e7-4bb4-ae9c-654f696fc4ce-tigera-ca-bundle\") pod \"calico-typha-7c47dfcd9c-tbrfq\" (UID: \"e0927083-96e7-4bb4-ae9c-654f696fc4ce\") " pod="calico-system/calico-typha-7c47dfcd9c-tbrfq" Dec 16 02:16:08.164864 kubelet[2693]: I1216 02:16:08.164452 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zthhg\" (UniqueName: \"kubernetes.io/projected/e0927083-96e7-4bb4-ae9c-654f696fc4ce-kube-api-access-zthhg\") pod \"calico-typha-7c47dfcd9c-tbrfq\" (UID: \"e0927083-96e7-4bb4-ae9c-654f696fc4ce\") " pod="calico-system/calico-typha-7c47dfcd9c-tbrfq" Dec 16 02:16:08.253479 systemd[1]: Created slice kubepods-besteffort-pod9b0c7990_1562_432e_a691_bca3895ca70d.slice - libcontainer container kubepods-besteffort-pod9b0c7990_1562_432e_a691_bca3895ca70d.slice. 
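The NETFILTER_CFG records above log repeated iptables-restore runs (ppid 2803) whose filter table grows from 15 to 21 entries while the nat table stays at 12; their PROCTITLE values decode, with a helper like the one sketched earlier, to "iptables-restore -w 5 -W 100000 --noflush --counters". A small illustrative sketch (not part of the log; field names follow the records above) for pulling those counts out of journal text:

import re

# Illustrative sketch: extract table name, entry count and operation from
# NETFILTER_CFG audit records like the ones logged above.
NETFILTER_CFG = re.compile(
    r"NETFILTER_CFG table=(?P<table>[\w-]+):\d+ "
    r"family=\d+ entries=(?P<entries>\d+) op=(?P<op>\w+)"
)

def netfilter_events(lines):
    for line in lines:
        m = NETFILTER_CFG.search(line)
        if m:
            yield m.group("table"), int(m.group("entries")), m.group("op")

sample = ('Dec 16 02:16:08.020000 audit[3170]: NETFILTER_CFG table=filter:115 '
          'family=2 entries=21 op=nft_register_rule pid=3170 '
          'subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"')
print(list(netfilter_events([sample])))   # [('filter', 21, 'nft_register_rule')]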
Dec 16 02:16:08.264685 kubelet[2693]: I1216 02:16:08.264620 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9b0c7990-1562-432e-a691-bca3895ca70d-var-lib-calico\") pod \"calico-node-l8tw2\" (UID: \"9b0c7990-1562-432e-a691-bca3895ca70d\") " pod="calico-system/calico-node-l8tw2" Dec 16 02:16:08.264685 kubelet[2693]: I1216 02:16:08.264658 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lx5z\" (UniqueName: \"kubernetes.io/projected/9b0c7990-1562-432e-a691-bca3895ca70d-kube-api-access-2lx5z\") pod \"calico-node-l8tw2\" (UID: \"9b0c7990-1562-432e-a691-bca3895ca70d\") " pod="calico-system/calico-node-l8tw2" Dec 16 02:16:08.264685 kubelet[2693]: I1216 02:16:08.264692 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9b0c7990-1562-432e-a691-bca3895ca70d-flexvol-driver-host\") pod \"calico-node-l8tw2\" (UID: \"9b0c7990-1562-432e-a691-bca3895ca70d\") " pod="calico-system/calico-node-l8tw2" Dec 16 02:16:08.264956 kubelet[2693]: I1216 02:16:08.264713 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b0c7990-1562-432e-a691-bca3895ca70d-tigera-ca-bundle\") pod \"calico-node-l8tw2\" (UID: \"9b0c7990-1562-432e-a691-bca3895ca70d\") " pod="calico-system/calico-node-l8tw2" Dec 16 02:16:08.264956 kubelet[2693]: I1216 02:16:08.264736 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b0c7990-1562-432e-a691-bca3895ca70d-xtables-lock\") pod \"calico-node-l8tw2\" (UID: \"9b0c7990-1562-432e-a691-bca3895ca70d\") " pod="calico-system/calico-node-l8tw2" Dec 16 02:16:08.264956 kubelet[2693]: I1216 02:16:08.264840 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9b0c7990-1562-432e-a691-bca3895ca70d-cni-bin-dir\") pod \"calico-node-l8tw2\" (UID: \"9b0c7990-1562-432e-a691-bca3895ca70d\") " pod="calico-system/calico-node-l8tw2" Dec 16 02:16:08.264956 kubelet[2693]: I1216 02:16:08.264953 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9b0c7990-1562-432e-a691-bca3895ca70d-cni-net-dir\") pod \"calico-node-l8tw2\" (UID: \"9b0c7990-1562-432e-a691-bca3895ca70d\") " pod="calico-system/calico-node-l8tw2" Dec 16 02:16:08.265192 kubelet[2693]: I1216 02:16:08.264980 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9b0c7990-1562-432e-a691-bca3895ca70d-policysync\") pod \"calico-node-l8tw2\" (UID: \"9b0c7990-1562-432e-a691-bca3895ca70d\") " pod="calico-system/calico-node-l8tw2" Dec 16 02:16:08.265192 kubelet[2693]: I1216 02:16:08.265015 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9b0c7990-1562-432e-a691-bca3895ca70d-var-run-calico\") pod \"calico-node-l8tw2\" (UID: \"9b0c7990-1562-432e-a691-bca3895ca70d\") " pod="calico-system/calico-node-l8tw2" Dec 16 02:16:08.265192 kubelet[2693]: I1216 02:16:08.265041 2693 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9b0c7990-1562-432e-a691-bca3895ca70d-cni-log-dir\") pod \"calico-node-l8tw2\" (UID: \"9b0c7990-1562-432e-a691-bca3895ca70d\") " pod="calico-system/calico-node-l8tw2" Dec 16 02:16:08.265192 kubelet[2693]: I1216 02:16:08.265066 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b0c7990-1562-432e-a691-bca3895ca70d-lib-modules\") pod \"calico-node-l8tw2\" (UID: \"9b0c7990-1562-432e-a691-bca3895ca70d\") " pod="calico-system/calico-node-l8tw2" Dec 16 02:16:08.265192 kubelet[2693]: I1216 02:16:08.265093 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9b0c7990-1562-432e-a691-bca3895ca70d-node-certs\") pod \"calico-node-l8tw2\" (UID: \"9b0c7990-1562-432e-a691-bca3895ca70d\") " pod="calico-system/calico-node-l8tw2" Dec 16 02:16:08.361433 kubelet[2693]: E1216 02:16:08.360680 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:08.361513 containerd[1550]: time="2025-12-16T02:16:08.361306067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c47dfcd9c-tbrfq,Uid:e0927083-96e7-4bb4-ae9c-654f696fc4ce,Namespace:calico-system,Attempt:0,}" Dec 16 02:16:08.374611 kubelet[2693]: E1216 02:16:08.373605 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.374611 kubelet[2693]: W1216 02:16:08.373629 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.374611 kubelet[2693]: E1216 02:16:08.373661 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.376131 kubelet[2693]: E1216 02:16:08.376112 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.376131 kubelet[2693]: W1216 02:16:08.376130 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.376214 kubelet[2693]: E1216 02:16:08.376146 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:08.419645 containerd[1550]: time="2025-12-16T02:16:08.419077079Z" level=info msg="connecting to shim d5a4ca8949e8fb752def8c3e0643e25ba18258e3a8110f23587943579373e81a" address="unix:///run/containerd/s/762bf34dbf70ae008852c2f334f86575b7a55d29326806b5fdff58eca1d1c3a9" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:16:08.421546 kubelet[2693]: E1216 02:16:08.421486 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xd2kv" podUID="fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff" Dec 16 02:16:08.449791 systemd[1]: Started cri-containerd-d5a4ca8949e8fb752def8c3e0643e25ba18258e3a8110f23587943579373e81a.scope - libcontainer container d5a4ca8949e8fb752def8c3e0643e25ba18258e3a8110f23587943579373e81a. Dec 16 02:16:08.465556 kubelet[2693]: E1216 02:16:08.465518 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.465556 kubelet[2693]: W1216 02:16:08.465546 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.465752 kubelet[2693]: E1216 02:16:08.465567 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.466363 kubelet[2693]: E1216 02:16:08.466346 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.466419 kubelet[2693]: W1216 02:16:08.466362 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.466446 kubelet[2693]: E1216 02:16:08.466420 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.466665 kubelet[2693]: E1216 02:16:08.466631 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.466665 kubelet[2693]: W1216 02:16:08.466643 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.466665 kubelet[2693]: E1216 02:16:08.466654 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:08.466848 kubelet[2693]: E1216 02:16:08.466835 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.466848 kubelet[2693]: W1216 02:16:08.466846 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.466913 kubelet[2693]: E1216 02:16:08.466855 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.467055 kubelet[2693]: E1216 02:16:08.467041 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.467055 kubelet[2693]: W1216 02:16:08.467053 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.467131 kubelet[2693]: E1216 02:16:08.467061 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.467216 kubelet[2693]: E1216 02:16:08.467198 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.467216 kubelet[2693]: W1216 02:16:08.467215 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.467269 kubelet[2693]: E1216 02:16:08.467225 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.467390 kubelet[2693]: E1216 02:16:08.467361 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.467416 kubelet[2693]: W1216 02:16:08.467390 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.467416 kubelet[2693]: E1216 02:16:08.467400 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.467547 kubelet[2693]: E1216 02:16:08.467525 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.467547 kubelet[2693]: W1216 02:16:08.467536 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.467547 kubelet[2693]: E1216 02:16:08.467543 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:08.467851 kubelet[2693]: E1216 02:16:08.467835 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.467851 kubelet[2693]: W1216 02:16:08.467850 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.467851 kubelet[2693]: E1216 02:16:08.467861 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.468038 kubelet[2693]: E1216 02:16:08.468026 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.468038 kubelet[2693]: W1216 02:16:08.468037 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.468084 kubelet[2693]: E1216 02:16:08.468046 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.468183 kubelet[2693]: E1216 02:16:08.468172 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.468183 kubelet[2693]: W1216 02:16:08.468181 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.468242 kubelet[2693]: E1216 02:16:08.468189 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.468335 kubelet[2693]: E1216 02:16:08.468324 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.468335 kubelet[2693]: W1216 02:16:08.468334 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.468393 kubelet[2693]: E1216 02:16:08.468342 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.468502 kubelet[2693]: E1216 02:16:08.468490 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.468502 kubelet[2693]: W1216 02:16:08.468500 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.468550 kubelet[2693]: E1216 02:16:08.468508 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:08.468770 kubelet[2693]: E1216 02:16:08.468742 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.468823 kubelet[2693]: W1216 02:16:08.468780 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.468823 kubelet[2693]: E1216 02:16:08.468793 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.468969 kubelet[2693]: E1216 02:16:08.468956 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.468998 kubelet[2693]: W1216 02:16:08.468968 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.468998 kubelet[2693]: E1216 02:16:08.468984 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.469158 kubelet[2693]: E1216 02:16:08.469128 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.469158 kubelet[2693]: W1216 02:16:08.469154 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.469206 kubelet[2693]: E1216 02:16:08.469164 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.469597 kubelet[2693]: E1216 02:16:08.469565 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.469597 kubelet[2693]: W1216 02:16:08.469595 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.469654 kubelet[2693]: E1216 02:16:08.469616 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:08.469654 kubelet[2693]: I1216 02:16:08.469639 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff-varrun\") pod \"csi-node-driver-xd2kv\" (UID: \"fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff\") " pod="calico-system/csi-node-driver-xd2kv" Dec 16 02:16:08.469878 kubelet[2693]: E1216 02:16:08.469862 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.469878 kubelet[2693]: W1216 02:16:08.469877 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.469940 kubelet[2693]: E1216 02:16:08.469900 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.469940 kubelet[2693]: I1216 02:16:08.469919 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff-socket-dir\") pod \"csi-node-driver-xd2kv\" (UID: \"fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff\") " pod="calico-system/csi-node-driver-xd2kv" Dec 16 02:16:08.470266 kubelet[2693]: E1216 02:16:08.470250 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.470266 kubelet[2693]: W1216 02:16:08.470265 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.470323 kubelet[2693]: E1216 02:16:08.470281 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.470323 kubelet[2693]: I1216 02:16:08.470297 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff-registration-dir\") pod \"csi-node-driver-xd2kv\" (UID: \"fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff\") " pod="calico-system/csi-node-driver-xd2kv" Dec 16 02:16:08.470483 kubelet[2693]: E1216 02:16:08.470470 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.470483 kubelet[2693]: W1216 02:16:08.470482 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.470528 kubelet[2693]: E1216 02:16:08.470497 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:08.470528 kubelet[2693]: I1216 02:16:08.470512 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff-kubelet-dir\") pod \"csi-node-driver-xd2kv\" (UID: \"fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff\") " pod="calico-system/csi-node-driver-xd2kv" Dec 16 02:16:08.469000 audit: BPF prog-id=156 op=LOAD Dec 16 02:16:08.471846 kubelet[2693]: E1216 02:16:08.471785 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.471846 kubelet[2693]: W1216 02:16:08.471798 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.471846 kubelet[2693]: E1216 02:16:08.471816 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.470000 audit: BPF prog-id=157 op=LOAD Dec 16 02:16:08.470000 audit[3206]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3188 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:08.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435613463613839343965386662373532646566386333653036343365 Dec 16 02:16:08.470000 audit: BPF prog-id=157 op=UNLOAD Dec 16 02:16:08.470000 audit[3206]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3188 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:08.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435613463613839343965386662373532646566386333653036343365 Dec 16 02:16:08.472329 kubelet[2693]: E1216 02:16:08.472232 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.472329 kubelet[2693]: W1216 02:16:08.472244 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.472329 kubelet[2693]: E1216 02:16:08.472274 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:08.472435 kubelet[2693]: E1216 02:16:08.472412 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.472435 kubelet[2693]: W1216 02:16:08.472425 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.472509 kubelet[2693]: E1216 02:16:08.472498 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.472698 kubelet[2693]: E1216 02:16:08.472681 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.472727 kubelet[2693]: W1216 02:16:08.472698 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.472727 kubelet[2693]: E1216 02:16:08.472722 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.472952 kubelet[2693]: E1216 02:16:08.472933 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.472952 kubelet[2693]: W1216 02:16:08.472950 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.473026 kubelet[2693]: E1216 02:16:08.472977 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:08.471000 audit: BPF prog-id=158 op=LOAD Dec 16 02:16:08.471000 audit[3206]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3188 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:08.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435613463613839343965386662373532646566386333653036343365 Dec 16 02:16:08.471000 audit: BPF prog-id=159 op=LOAD Dec 16 02:16:08.471000 audit[3206]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3188 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:08.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435613463613839343965386662373532646566386333653036343365 Dec 16 02:16:08.471000 audit: BPF prog-id=159 op=UNLOAD Dec 16 02:16:08.471000 audit[3206]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3188 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:08.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435613463613839343965386662373532646566386333653036343365 Dec 16 02:16:08.471000 audit: BPF prog-id=158 op=UNLOAD Dec 16 02:16:08.471000 audit[3206]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3188 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:08.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435613463613839343965386662373532646566386333653036343365 Dec 16 02:16:08.471000 audit: BPF prog-id=160 op=LOAD Dec 16 02:16:08.471000 audit[3206]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3188 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:08.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435613463613839343965386662373532646566386333653036343365 Dec 16 02:16:08.473952 kubelet[2693]: E1216 02:16:08.473904 2693 driver-call.go:262] Failed to unmarshal output for 
command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.473952 kubelet[2693]: W1216 02:16:08.473926 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.474038 kubelet[2693]: E1216 02:16:08.473957 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.474143 kubelet[2693]: E1216 02:16:08.474128 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.474143 kubelet[2693]: W1216 02:16:08.474141 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.474197 kubelet[2693]: E1216 02:16:08.474181 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.474331 kubelet[2693]: E1216 02:16:08.474319 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.474331 kubelet[2693]: W1216 02:16:08.474330 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.474385 kubelet[2693]: E1216 02:16:08.474347 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.474537 kubelet[2693]: E1216 02:16:08.474518 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.474537 kubelet[2693]: W1216 02:16:08.474534 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.474640 kubelet[2693]: E1216 02:16:08.474550 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.474799 kubelet[2693]: E1216 02:16:08.474784 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.474799 kubelet[2693]: W1216 02:16:08.474797 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.474963 kubelet[2693]: E1216 02:16:08.474814 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:08.475034 kubelet[2693]: E1216 02:16:08.475004 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.475034 kubelet[2693]: W1216 02:16:08.475017 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.475034 kubelet[2693]: E1216 02:16:08.475025 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.475186 kubelet[2693]: E1216 02:16:08.475171 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.475186 kubelet[2693]: W1216 02:16:08.475183 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.475253 kubelet[2693]: E1216 02:16:08.475191 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.507843 containerd[1550]: time="2025-12-16T02:16:08.507783209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c47dfcd9c-tbrfq,Uid:e0927083-96e7-4bb4-ae9c-654f696fc4ce,Namespace:calico-system,Attempt:0,} returns sandbox id \"d5a4ca8949e8fb752def8c3e0643e25ba18258e3a8110f23587943579373e81a\"" Dec 16 02:16:08.514960 kubelet[2693]: E1216 02:16:08.514734 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:08.520495 containerd[1550]: time="2025-12-16T02:16:08.520116080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 02:16:08.555944 kubelet[2693]: E1216 02:16:08.555891 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:08.556599 containerd[1550]: time="2025-12-16T02:16:08.556543017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l8tw2,Uid:9b0c7990-1562-432e-a691-bca3895ca70d,Namespace:calico-system,Attempt:0,}" Dec 16 02:16:08.571395 kubelet[2693]: E1216 02:16:08.571372 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.571395 kubelet[2693]: W1216 02:16:08.571392 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.571530 kubelet[2693]: E1216 02:16:08.571421 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:08.571657 kubelet[2693]: E1216 02:16:08.571639 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.571691 kubelet[2693]: W1216 02:16:08.571657 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.571691 kubelet[2693]: E1216 02:16:08.571673 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.571890 kubelet[2693]: E1216 02:16:08.571871 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.571941 kubelet[2693]: W1216 02:16:08.571888 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.571941 kubelet[2693]: E1216 02:16:08.571906 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.572105 kubelet[2693]: E1216 02:16:08.572088 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.572105 kubelet[2693]: W1216 02:16:08.572103 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.572169 kubelet[2693]: E1216 02:16:08.572118 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.572306 kubelet[2693]: E1216 02:16:08.572294 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.572346 kubelet[2693]: W1216 02:16:08.572308 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.572346 kubelet[2693]: E1216 02:16:08.572323 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:08.572391 kubelet[2693]: I1216 02:16:08.572339 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n67sh\" (UniqueName: \"kubernetes.io/projected/fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff-kube-api-access-n67sh\") pod \"csi-node-driver-xd2kv\" (UID: \"fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff\") " pod="calico-system/csi-node-driver-xd2kv" Dec 16 02:16:08.572542 kubelet[2693]: E1216 02:16:08.572528 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.572601 kubelet[2693]: W1216 02:16:08.572548 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.572601 kubelet[2693]: E1216 02:16:08.572563 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.572768 kubelet[2693]: E1216 02:16:08.572750 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.572768 kubelet[2693]: W1216 02:16:08.572766 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.572886 kubelet[2693]: E1216 02:16:08.572802 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.572920 kubelet[2693]: E1216 02:16:08.572890 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.572920 kubelet[2693]: W1216 02:16:08.572898 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.572920 kubelet[2693]: E1216 02:16:08.572946 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.572920 kubelet[2693]: E1216 02:16:08.573036 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.572920 kubelet[2693]: W1216 02:16:08.573043 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.573181 kubelet[2693]: E1216 02:16:08.573090 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:08.573294 kubelet[2693]: E1216 02:16:08.573202 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.573294 kubelet[2693]: W1216 02:16:08.573212 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.573294 kubelet[2693]: E1216 02:16:08.573233 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.573385 kubelet[2693]: E1216 02:16:08.573348 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.573385 kubelet[2693]: W1216 02:16:08.573355 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.573385 kubelet[2693]: E1216 02:16:08.573367 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.574626 kubelet[2693]: E1216 02:16:08.573531 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.574626 kubelet[2693]: W1216 02:16:08.573540 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.574626 kubelet[2693]: E1216 02:16:08.573548 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.574626 kubelet[2693]: E1216 02:16:08.573696 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.574626 kubelet[2693]: W1216 02:16:08.573702 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.574626 kubelet[2693]: E1216 02:16:08.573711 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.574626 kubelet[2693]: E1216 02:16:08.574057 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.574626 kubelet[2693]: W1216 02:16:08.574071 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.574626 kubelet[2693]: E1216 02:16:08.574085 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:08.574626 kubelet[2693]: E1216 02:16:08.574245 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.574869 kubelet[2693]: W1216 02:16:08.574253 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.574869 kubelet[2693]: E1216 02:16:08.574372 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.574869 kubelet[2693]: E1216 02:16:08.574437 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.574869 kubelet[2693]: W1216 02:16:08.574443 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.574869 kubelet[2693]: E1216 02:16:08.574472 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.574869 kubelet[2693]: E1216 02:16:08.574592 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.574869 kubelet[2693]: W1216 02:16:08.574599 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.574869 kubelet[2693]: E1216 02:16:08.574674 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.574869 kubelet[2693]: E1216 02:16:08.574798 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.574869 kubelet[2693]: W1216 02:16:08.574805 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.575076 kubelet[2693]: E1216 02:16:08.574813 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.575076 kubelet[2693]: E1216 02:16:08.575001 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.575076 kubelet[2693]: W1216 02:16:08.575010 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.575076 kubelet[2693]: E1216 02:16:08.575020 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:08.575188 kubelet[2693]: E1216 02:16:08.575167 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.575188 kubelet[2693]: W1216 02:16:08.575179 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.575249 kubelet[2693]: E1216 02:16:08.575194 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.575417 kubelet[2693]: E1216 02:16:08.575403 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.575417 kubelet[2693]: W1216 02:16:08.575415 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.575503 kubelet[2693]: E1216 02:16:08.575427 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.575573 kubelet[2693]: E1216 02:16:08.575561 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.575573 kubelet[2693]: W1216 02:16:08.575570 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.575653 kubelet[2693]: E1216 02:16:08.575579 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.575863 kubelet[2693]: E1216 02:16:08.575849 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.575863 kubelet[2693]: W1216 02:16:08.575862 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.575937 kubelet[2693]: E1216 02:16:08.575871 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.581234 containerd[1550]: time="2025-12-16T02:16:08.581181752Z" level=info msg="connecting to shim 0ea54b99b8f2abdc6e0b44d25785d8aa99609530143b79be364458a648fa7ff3" address="unix:///run/containerd/s/d4adc010cbaafae89ea314472705035513c178a432c87a8a313f0b8b23d79287" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:16:08.616822 systemd[1]: Started cri-containerd-0ea54b99b8f2abdc6e0b44d25785d8aa99609530143b79be364458a648fa7ff3.scope - libcontainer container 0ea54b99b8f2abdc6e0b44d25785d8aa99609530143b79be364458a648fa7ff3. 
Dec 16 02:16:08.626000 audit: BPF prog-id=161 op=LOAD Dec 16 02:16:08.626000 audit: BPF prog-id=162 op=LOAD Dec 16 02:16:08.626000 audit[3311]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3300 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:08.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065613534623939623866326162646336653062343464323537383564 Dec 16 02:16:08.626000 audit: BPF prog-id=162 op=UNLOAD Dec 16 02:16:08.626000 audit[3311]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:08.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065613534623939623866326162646336653062343464323537383564 Dec 16 02:16:08.627000 audit: BPF prog-id=163 op=LOAD Dec 16 02:16:08.627000 audit[3311]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3300 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:08.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065613534623939623866326162646336653062343464323537383564 Dec 16 02:16:08.627000 audit: BPF prog-id=164 op=LOAD Dec 16 02:16:08.627000 audit[3311]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3300 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:08.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065613534623939623866326162646336653062343464323537383564 Dec 16 02:16:08.627000 audit: BPF prog-id=164 op=UNLOAD Dec 16 02:16:08.627000 audit[3311]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:08.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065613534623939623866326162646336653062343464323537383564 Dec 16 02:16:08.627000 audit: BPF prog-id=163 op=UNLOAD Dec 16 02:16:08.627000 audit[3311]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:08.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065613534623939623866326162646336653062343464323537383564 Dec 16 02:16:08.627000 audit: BPF prog-id=165 op=LOAD Dec 16 02:16:08.627000 audit[3311]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3300 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:08.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065613534623939623866326162646336653062343464323537383564 Dec 16 02:16:08.641641 containerd[1550]: time="2025-12-16T02:16:08.641580572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l8tw2,Uid:9b0c7990-1562-432e-a691-bca3895ca70d,Namespace:calico-system,Attempt:0,} returns sandbox id \"0ea54b99b8f2abdc6e0b44d25785d8aa99609530143b79be364458a648fa7ff3\"" Dec 16 02:16:08.642414 kubelet[2693]: E1216 02:16:08.642392 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:08.673203 kubelet[2693]: E1216 02:16:08.673170 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.673203 kubelet[2693]: W1216 02:16:08.673194 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.673323 kubelet[2693]: E1216 02:16:08.673214 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.673443 kubelet[2693]: E1216 02:16:08.673426 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.673443 kubelet[2693]: W1216 02:16:08.673438 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.673504 kubelet[2693]: E1216 02:16:08.673448 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:08.673668 kubelet[2693]: E1216 02:16:08.673655 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.673668 kubelet[2693]: W1216 02:16:08.673667 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.673735 kubelet[2693]: E1216 02:16:08.673676 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.673862 kubelet[2693]: E1216 02:16:08.673849 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.673862 kubelet[2693]: W1216 02:16:08.673861 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.673921 kubelet[2693]: E1216 02:16:08.673871 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.674086 kubelet[2693]: E1216 02:16:08.674074 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.674086 kubelet[2693]: W1216 02:16:08.674085 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.674153 kubelet[2693]: E1216 02:16:08.674094 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:08.685308 kubelet[2693]: E1216 02:16:08.685256 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:08.685308 kubelet[2693]: W1216 02:16:08.685279 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:08.685308 kubelet[2693]: E1216 02:16:08.685296 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:09.044000 audit[3345]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3345 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:09.044000 audit[3345]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd59f4570 a2=0 a3=1 items=0 ppid=2803 pid=3345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:09.044000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:09.051000 audit[3345]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3345 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:09.051000 audit[3345]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd59f4570 a2=0 a3=1 items=0 ppid=2803 pid=3345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:09.051000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:09.566204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3262362290.mount: Deactivated successfully. Dec 16 02:16:10.509534 containerd[1550]: time="2025-12-16T02:16:10.509468173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:16:10.510120 containerd[1550]: time="2025-12-16T02:16:10.510065724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Dec 16 02:16:10.510977 containerd[1550]: time="2025-12-16T02:16:10.510948127Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:16:10.512966 containerd[1550]: time="2025-12-16T02:16:10.512920251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:16:10.513501 containerd[1550]: time="2025-12-16T02:16:10.513473393Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.993318666s" Dec 16 02:16:10.513536 containerd[1550]: time="2025-12-16T02:16:10.513506319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 16 02:16:10.516400 containerd[1550]: time="2025-12-16T02:16:10.516177132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 02:16:10.520483 kubelet[2693]: E1216 02:16:10.520445 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xd2kv" podUID="fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff" Dec 16 02:16:10.526837 containerd[1550]: time="2025-12-16T02:16:10.526797214Z" level=info msg="CreateContainer within sandbox \"d5a4ca8949e8fb752def8c3e0643e25ba18258e3a8110f23587943579373e81a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 02:16:10.533499 containerd[1550]: time="2025-12-16T02:16:10.533463645Z" level=info msg="Container 9f423a87e00088094fbed00d662db3616e0c9083f9a354742f61414890d7bb08: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:16:10.545355 containerd[1550]: time="2025-12-16T02:16:10.545303272Z" level=info msg="CreateContainer within sandbox \"d5a4ca8949e8fb752def8c3e0643e25ba18258e3a8110f23587943579373e81a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9f423a87e00088094fbed00d662db3616e0c9083f9a354742f61414890d7bb08\"" Dec 16 02:16:10.545855 containerd[1550]: time="2025-12-16T02:16:10.545798043Z" level=info msg="StartContainer for \"9f423a87e00088094fbed00d662db3616e0c9083f9a354742f61414890d7bb08\"" Dec 16 02:16:10.547226 containerd[1550]: time="2025-12-16T02:16:10.547186700Z" level=info msg="connecting to shim 9f423a87e00088094fbed00d662db3616e0c9083f9a354742f61414890d7bb08" address="unix:///run/containerd/s/762bf34dbf70ae008852c2f334f86575b7a55d29326806b5fdff58eca1d1c3a9" protocol=ttrpc version=3 Dec 16 02:16:10.573794 systemd[1]: Started cri-containerd-9f423a87e00088094fbed00d662db3616e0c9083f9a354742f61414890d7bb08.scope - libcontainer container 9f423a87e00088094fbed00d662db3616e0c9083f9a354742f61414890d7bb08. Dec 16 02:16:10.584000 audit: BPF prog-id=166 op=LOAD Dec 16 02:16:10.585000 audit: BPF prog-id=167 op=LOAD Dec 16 02:16:10.585000 audit[3356]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3188 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:10.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966343233613837653030303838303934666265643030643636326462 Dec 16 02:16:10.585000 audit: BPF prog-id=167 op=UNLOAD Dec 16 02:16:10.585000 audit[3356]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3188 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:10.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966343233613837653030303838303934666265643030643636326462 Dec 16 02:16:10.585000 audit: BPF prog-id=168 op=LOAD Dec 16 02:16:10.585000 audit[3356]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3188 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:10.585000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966343233613837653030303838303934666265643030643636326462 Dec 16 02:16:10.585000 audit: BPF prog-id=169 op=LOAD Dec 16 02:16:10.585000 audit[3356]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3188 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:10.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966343233613837653030303838303934666265643030643636326462 Dec 16 02:16:10.585000 audit: BPF prog-id=169 op=UNLOAD Dec 16 02:16:10.585000 audit[3356]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3188 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:10.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966343233613837653030303838303934666265643030643636326462 Dec 16 02:16:10.585000 audit: BPF prog-id=168 op=UNLOAD Dec 16 02:16:10.585000 audit[3356]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3188 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:10.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966343233613837653030303838303934666265643030643636326462 Dec 16 02:16:10.585000 audit: BPF prog-id=170 op=LOAD Dec 16 02:16:10.585000 audit[3356]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3188 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:10.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966343233613837653030303838303934666265643030643636326462 Dec 16 02:16:10.624559 containerd[1550]: time="2025-12-16T02:16:10.624523183Z" level=info msg="StartContainer for \"9f423a87e00088094fbed00d662db3616e0c9083f9a354742f61414890d7bb08\" returns successfully" Dec 16 02:16:11.628631 kubelet[2693]: E1216 02:16:11.628241 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:11.674713 kubelet[2693]: I1216 02:16:11.674206 2693 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="calico-system/calico-typha-7c47dfcd9c-tbrfq" podStartSLOduration=1.674949105 podStartE2EDuration="3.67418776s" podCreationTimestamp="2025-12-16 02:16:08 +0000 UTC" firstStartedPulling="2025-12-16 02:16:08.51657357 +0000 UTC m=+25.077724313" lastFinishedPulling="2025-12-16 02:16:10.515812225 +0000 UTC m=+27.076962968" observedRunningTime="2025-12-16 02:16:11.673966481 +0000 UTC m=+28.235117224" watchObservedRunningTime="2025-12-16 02:16:11.67418776 +0000 UTC m=+28.235338503" Dec 16 02:16:11.694687 kubelet[2693]: E1216 02:16:11.694652 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.694687 kubelet[2693]: W1216 02:16:11.694680 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.694687 kubelet[2693]: E1216 02:16:11.694702 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.694987 kubelet[2693]: E1216 02:16:11.694886 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.694987 kubelet[2693]: W1216 02:16:11.694894 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.694987 kubelet[2693]: E1216 02:16:11.694937 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.695100 kubelet[2693]: E1216 02:16:11.695077 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.695100 kubelet[2693]: W1216 02:16:11.695084 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.695100 kubelet[2693]: E1216 02:16:11.695092 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.695630 kubelet[2693]: E1216 02:16:11.695239 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.695630 kubelet[2693]: W1216 02:16:11.695252 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.695630 kubelet[2693]: E1216 02:16:11.695260 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:11.695630 kubelet[2693]: E1216 02:16:11.695417 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.695630 kubelet[2693]: W1216 02:16:11.695425 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.695630 kubelet[2693]: E1216 02:16:11.695433 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.695630 kubelet[2693]: E1216 02:16:11.695604 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.695630 kubelet[2693]: W1216 02:16:11.695612 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.695630 kubelet[2693]: E1216 02:16:11.695621 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.696508 kubelet[2693]: E1216 02:16:11.695778 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.696508 kubelet[2693]: W1216 02:16:11.695786 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.696508 kubelet[2693]: E1216 02:16:11.695794 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.696508 kubelet[2693]: E1216 02:16:11.695930 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.696508 kubelet[2693]: W1216 02:16:11.695937 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.696508 kubelet[2693]: E1216 02:16:11.695944 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.696508 kubelet[2693]: E1216 02:16:11.696106 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.696508 kubelet[2693]: W1216 02:16:11.696115 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.696508 kubelet[2693]: E1216 02:16:11.696122 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:11.696508 kubelet[2693]: E1216 02:16:11.696249 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.697470 kubelet[2693]: W1216 02:16:11.696256 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.697470 kubelet[2693]: E1216 02:16:11.696263 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.697470 kubelet[2693]: E1216 02:16:11.696384 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.697470 kubelet[2693]: W1216 02:16:11.696392 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.697470 kubelet[2693]: E1216 02:16:11.696399 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.697470 kubelet[2693]: E1216 02:16:11.696536 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.697470 kubelet[2693]: W1216 02:16:11.696549 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.697470 kubelet[2693]: E1216 02:16:11.696557 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.697470 kubelet[2693]: E1216 02:16:11.696761 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.697470 kubelet[2693]: W1216 02:16:11.696769 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.698225 kubelet[2693]: E1216 02:16:11.696777 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.698225 kubelet[2693]: E1216 02:16:11.696899 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.698225 kubelet[2693]: W1216 02:16:11.696906 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.698225 kubelet[2693]: E1216 02:16:11.696913 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:11.698225 kubelet[2693]: E1216 02:16:11.697044 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.698225 kubelet[2693]: W1216 02:16:11.697051 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.698225 kubelet[2693]: E1216 02:16:11.697059 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.698225 kubelet[2693]: E1216 02:16:11.697277 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.698225 kubelet[2693]: W1216 02:16:11.697286 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.698225 kubelet[2693]: E1216 02:16:11.697294 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.698420 kubelet[2693]: E1216 02:16:11.697456 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.698420 kubelet[2693]: W1216 02:16:11.697463 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.698420 kubelet[2693]: E1216 02:16:11.697479 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.698420 kubelet[2693]: E1216 02:16:11.697650 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.698420 kubelet[2693]: W1216 02:16:11.697658 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.698420 kubelet[2693]: E1216 02:16:11.697671 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.698420 kubelet[2693]: E1216 02:16:11.697869 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.698420 kubelet[2693]: W1216 02:16:11.697877 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.698420 kubelet[2693]: E1216 02:16:11.697893 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:11.698420 kubelet[2693]: E1216 02:16:11.698049 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.698645 kubelet[2693]: W1216 02:16:11.698056 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.698645 kubelet[2693]: E1216 02:16:11.698067 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.698645 kubelet[2693]: E1216 02:16:11.698187 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.698645 kubelet[2693]: W1216 02:16:11.698194 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.698645 kubelet[2693]: E1216 02:16:11.698202 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.698645 kubelet[2693]: E1216 02:16:11.698344 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.698645 kubelet[2693]: W1216 02:16:11.698351 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.698645 kubelet[2693]: E1216 02:16:11.698366 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.699805 kubelet[2693]: E1216 02:16:11.699600 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.699805 kubelet[2693]: W1216 02:16:11.699672 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.699805 kubelet[2693]: E1216 02:16:11.699696 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.700131 kubelet[2693]: E1216 02:16:11.700032 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.700131 kubelet[2693]: W1216 02:16:11.700046 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.700131 kubelet[2693]: E1216 02:16:11.700110 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:11.700331 kubelet[2693]: E1216 02:16:11.700319 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.700454 kubelet[2693]: W1216 02:16:11.700396 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.700618 kubelet[2693]: E1216 02:16:11.700530 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.700886 kubelet[2693]: E1216 02:16:11.700755 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.700886 kubelet[2693]: W1216 02:16:11.700794 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.700886 kubelet[2693]: E1216 02:16:11.700830 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.701048 kubelet[2693]: E1216 02:16:11.701036 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.701099 kubelet[2693]: W1216 02:16:11.701088 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.701213 kubelet[2693]: E1216 02:16:11.701201 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.701560 kubelet[2693]: E1216 02:16:11.701519 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.701560 kubelet[2693]: W1216 02:16:11.701538 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.701560 kubelet[2693]: E1216 02:16:11.701559 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.702005 kubelet[2693]: E1216 02:16:11.701989 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.702005 kubelet[2693]: W1216 02:16:11.702004 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.702212 kubelet[2693]: E1216 02:16:11.702020 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:11.702380 kubelet[2693]: E1216 02:16:11.702364 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.702612 kubelet[2693]: W1216 02:16:11.702437 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.702612 kubelet[2693]: E1216 02:16:11.702460 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.702984 kubelet[2693]: E1216 02:16:11.702965 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.702984 kubelet[2693]: W1216 02:16:11.702982 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.703058 kubelet[2693]: E1216 02:16:11.702999 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.703272 kubelet[2693]: E1216 02:16:11.703253 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.703429 kubelet[2693]: W1216 02:16:11.703348 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.703429 kubelet[2693]: E1216 02:16:11.703403 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 02:16:11.703866 kubelet[2693]: E1216 02:16:11.703848 2693 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 02:16:11.703866 kubelet[2693]: W1216 02:16:11.703865 2693 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 02:16:11.703935 kubelet[2693]: E1216 02:16:11.703879 2693 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 02:16:11.741207 containerd[1550]: time="2025-12-16T02:16:11.741149693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:16:11.744061 containerd[1550]: time="2025-12-16T02:16:11.744011881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4262566" Dec 16 02:16:11.744927 containerd[1550]: time="2025-12-16T02:16:11.744902719Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:16:11.750551 containerd[1550]: time="2025-12-16T02:16:11.750176696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:16:11.750952 containerd[1550]: time="2025-12-16T02:16:11.750925789Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.234711929s" Dec 16 02:16:11.751022 containerd[1550]: time="2025-12-16T02:16:11.751008644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 16 02:16:11.754108 containerd[1550]: time="2025-12-16T02:16:11.754083790Z" level=info msg="CreateContainer within sandbox \"0ea54b99b8f2abdc6e0b44d25785d8aa99609530143b79be364458a648fa7ff3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 02:16:11.761848 containerd[1550]: time="2025-12-16T02:16:11.761797160Z" level=info msg="Container 4d41002358e9ec5b9de40eb0c443367d8123fff7174066a372198d932ac34096: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:16:11.770595 containerd[1550]: time="2025-12-16T02:16:11.770537392Z" level=info msg="CreateContainer within sandbox \"0ea54b99b8f2abdc6e0b44d25785d8aa99609530143b79be364458a648fa7ff3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4d41002358e9ec5b9de40eb0c443367d8123fff7174066a372198d932ac34096\"" Dec 16 02:16:11.771055 containerd[1550]: time="2025-12-16T02:16:11.771019318Z" level=info msg="StartContainer for \"4d41002358e9ec5b9de40eb0c443367d8123fff7174066a372198d932ac34096\"" Dec 16 02:16:11.772366 containerd[1550]: time="2025-12-16T02:16:11.772342593Z" level=info msg="connecting to shim 4d41002358e9ec5b9de40eb0c443367d8123fff7174066a372198d932ac34096" address="unix:///run/containerd/s/d4adc010cbaafae89ea314472705035513c178a432c87a8a313f0b8b23d79287" protocol=ttrpc version=3 Dec 16 02:16:11.797799 systemd[1]: Started cri-containerd-4d41002358e9ec5b9de40eb0c443367d8123fff7174066a372198d932ac34096.scope - libcontainer container 4d41002358e9ec5b9de40eb0c443367d8123fff7174066a372198d932ac34096. 
Dec 16 02:16:11.862000 audit: BPF prog-id=171 op=LOAD Dec 16 02:16:11.864037 kernel: kauditd_printk_skb: 86 callbacks suppressed Dec 16 02:16:11.864090 kernel: audit: type=1334 audit(1765851371.862:565): prog-id=171 op=LOAD Dec 16 02:16:11.862000 audit[3434]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3300 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:11.868096 kernel: audit: type=1300 audit(1765851371.862:565): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3300 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:11.868145 kernel: audit: type=1327 audit(1765851371.862:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343130303233353865396563356239646534306562306334343333 Dec 16 02:16:11.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343130303233353865396563356239646534306562306334343333 Dec 16 02:16:11.862000 audit: BPF prog-id=172 op=LOAD Dec 16 02:16:11.872027 kernel: audit: type=1334 audit(1765851371.862:566): prog-id=172 op=LOAD Dec 16 02:16:11.872101 kernel: audit: type=1300 audit(1765851371.862:566): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3300 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:11.862000 audit[3434]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3300 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:11.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343130303233353865396563356239646534306562306334343333 Dec 16 02:16:11.878537 kernel: audit: type=1327 audit(1765851371.862:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343130303233353865396563356239646534306562306334343333 Dec 16 02:16:11.863000 audit: BPF prog-id=172 op=UNLOAD Dec 16 02:16:11.879806 kernel: audit: type=1334 audit(1765851371.863:567): prog-id=172 op=UNLOAD Dec 16 02:16:11.863000 audit[3434]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:11.883341 kernel: audit: type=1300 
audit(1765851371.863:567): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:11.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343130303233353865396563356239646534306562306334343333 Dec 16 02:16:11.887266 kernel: audit: type=1327 audit(1765851371.863:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343130303233353865396563356239646534306562306334343333 Dec 16 02:16:11.888629 kernel: audit: type=1334 audit(1765851371.863:568): prog-id=171 op=UNLOAD Dec 16 02:16:11.863000 audit: BPF prog-id=171 op=UNLOAD Dec 16 02:16:11.863000 audit[3434]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:11.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343130303233353865396563356239646534306562306334343333 Dec 16 02:16:11.863000 audit: BPF prog-id=173 op=LOAD Dec 16 02:16:11.863000 audit[3434]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3300 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:11.863000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464343130303233353865396563356239646534306562306334343333 Dec 16 02:16:11.895795 containerd[1550]: time="2025-12-16T02:16:11.895724945Z" level=info msg="StartContainer for \"4d41002358e9ec5b9de40eb0c443367d8123fff7174066a372198d932ac34096\" returns successfully" Dec 16 02:16:11.908702 systemd[1]: cri-containerd-4d41002358e9ec5b9de40eb0c443367d8123fff7174066a372198d932ac34096.scope: Deactivated successfully. Dec 16 02:16:11.910173 containerd[1550]: time="2025-12-16T02:16:11.910095017Z" level=info msg="received container exit event container_id:\"4d41002358e9ec5b9de40eb0c443367d8123fff7174066a372198d932ac34096\" id:\"4d41002358e9ec5b9de40eb0c443367d8123fff7174066a372198d932ac34096\" pid:3447 exited_at:{seconds:1765851371 nanos:909447262}" Dec 16 02:16:11.911000 audit: BPF prog-id=173 op=UNLOAD Dec 16 02:16:11.929644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d41002358e9ec5b9de40eb0c443367d8123fff7174066a372198d932ac34096-rootfs.mount: Deactivated successfully. 
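
The audit records above are runc loading and unloading BPF programs while starting the flexvol-driver container; the PROCTITLE field in each record is the runc command line, hex-encoded with NUL bytes separating the arguments. A short decoding sketch (illustrative only, not part of auditd):

    # Illustrative decoder, not auditd code: PROCTITLE records store the command
    # line hex-encoded, with NUL bytes between arguments.
    def decode_proctitle(hexstr: str) -> list[str]:
        return [a.decode() for a in bytes.fromhex(hexstr).split(b"\x00") if a]

    # Leading slice of the record above; the full field decodes to
    # runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.
    # runtime.v2.task/k8s.io/4d41002358e9ec5b9de40eb0c4433... (truncated in the record).
    sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
    print(decode_proctitle(sample))   # ['runc', '--root', '/run/containerd/runc/k8s.io']
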
Dec 16 02:16:12.520655 kubelet[2693]: E1216 02:16:12.520611 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xd2kv" podUID="fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff" Dec 16 02:16:12.633614 kubelet[2693]: E1216 02:16:12.633565 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:12.635557 containerd[1550]: time="2025-12-16T02:16:12.635187579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 02:16:13.186170 kubelet[2693]: I1216 02:16:13.186132 2693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 02:16:13.186480 kubelet[2693]: E1216 02:16:13.186462 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:13.248000 audit[3489]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3489 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:13.248000 audit[3489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff2ee2630 a2=0 a3=1 items=0 ppid=2803 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:13.248000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:13.255000 audit[3489]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3489 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:13.255000 audit[3489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffff2ee2630 a2=0 a3=1 items=0 ppid=2803 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:13.255000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:13.276000 audit[3491]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=3491 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:13.276000 audit[3491]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff47cb840 a2=0 a3=1 items=0 ppid=2803 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:13.276000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:13.285000 audit[3491]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_unregister_chain pid=3491 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:13.285000 audit[3491]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2956 a0=3 a1=fffff47cb840 a2=0 a3=1 items=0 ppid=2803 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:13.285000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:13.634471 kubelet[2693]: E1216 02:16:13.634444 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:14.303000 audit[3493]: NETFILTER_CFG table=filter:123 family=2 entries=21 op=nft_register_rule pid=3493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:14.303000 audit[3493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffb7bd380 a2=0 a3=1 items=0 ppid=2803 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:14.303000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:14.312000 audit[3493]: NETFILTER_CFG table=nat:124 family=2 entries=19 op=nft_register_chain pid=3493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:14.312000 audit[3493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffffb7bd380 a2=0 a3=1 items=0 ppid=2803 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:14.312000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:14.521186 kubelet[2693]: E1216 02:16:14.521133 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xd2kv" podUID="fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff" Dec 16 02:16:15.034629 containerd[1550]: time="2025-12-16T02:16:15.034549709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:16:15.035820 containerd[1550]: time="2025-12-16T02:16:15.035627074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 16 02:16:15.036627 containerd[1550]: time="2025-12-16T02:16:15.036580661Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:16:15.038511 containerd[1550]: time="2025-12-16T02:16:15.038485873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:16:15.039150 containerd[1550]: time="2025-12-16T02:16:15.039119530Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.403889304s" Dec 16 02:16:15.039150 containerd[1550]: time="2025-12-16T02:16:15.039149495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 02:16:15.041636 containerd[1550]: time="2025-12-16T02:16:15.041607392Z" level=info msg="CreateContainer within sandbox \"0ea54b99b8f2abdc6e0b44d25785d8aa99609530143b79be364458a648fa7ff3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 02:16:15.051833 containerd[1550]: time="2025-12-16T02:16:15.049803289Z" level=info msg="Container 6bba98c857e5cf34cf9268a6ba572f66506d432b1a3e17a3c0d6006f32e14d8a: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:16:15.079180 containerd[1550]: time="2025-12-16T02:16:15.079141389Z" level=info msg="CreateContainer within sandbox \"0ea54b99b8f2abdc6e0b44d25785d8aa99609530143b79be364458a648fa7ff3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6bba98c857e5cf34cf9268a6ba572f66506d432b1a3e17a3c0d6006f32e14d8a\"" Dec 16 02:16:15.079768 containerd[1550]: time="2025-12-16T02:16:15.079731880Z" level=info msg="StartContainer for \"6bba98c857e5cf34cf9268a6ba572f66506d432b1a3e17a3c0d6006f32e14d8a\"" Dec 16 02:16:15.082783 containerd[1550]: time="2025-12-16T02:16:15.082755984Z" level=info msg="connecting to shim 6bba98c857e5cf34cf9268a6ba572f66506d432b1a3e17a3c0d6006f32e14d8a" address="unix:///run/containerd/s/d4adc010cbaafae89ea314472705035513c178a432c87a8a313f0b8b23d79287" protocol=ttrpc version=3 Dec 16 02:16:15.105792 systemd[1]: Started cri-containerd-6bba98c857e5cf34cf9268a6ba572f66506d432b1a3e17a3c0d6006f32e14d8a.scope - libcontainer container 6bba98c857e5cf34cf9268a6ba572f66506d432b1a3e17a3c0d6006f32e14d8a. 
Dec 16 02:16:15.167000 audit: BPF prog-id=174 op=LOAD Dec 16 02:16:15.167000 audit[3502]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3300 pid=3502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:15.167000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662626139386338353765356366333463663932363861366261353732 Dec 16 02:16:15.167000 audit: BPF prog-id=175 op=LOAD Dec 16 02:16:15.167000 audit[3502]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3300 pid=3502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:15.167000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662626139386338353765356366333463663932363861366261353732 Dec 16 02:16:15.167000 audit: BPF prog-id=175 op=UNLOAD Dec 16 02:16:15.167000 audit[3502]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:15.167000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662626139386338353765356366333463663932363861366261353732 Dec 16 02:16:15.167000 audit: BPF prog-id=174 op=UNLOAD Dec 16 02:16:15.167000 audit[3502]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:15.167000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662626139386338353765356366333463663932363861366261353732 Dec 16 02:16:15.167000 audit: BPF prog-id=176 op=LOAD Dec 16 02:16:15.167000 audit[3502]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3300 pid=3502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:15.167000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662626139386338353765356366333463663932363861366261353732 Dec 16 02:16:15.191876 containerd[1550]: time="2025-12-16T02:16:15.191826235Z" level=info msg="StartContainer for 
\"6bba98c857e5cf34cf9268a6ba572f66506d432b1a3e17a3c0d6006f32e14d8a\" returns successfully" Dec 16 02:16:15.649323 kubelet[2693]: E1216 02:16:15.649257 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:15.738790 systemd[1]: cri-containerd-6bba98c857e5cf34cf9268a6ba572f66506d432b1a3e17a3c0d6006f32e14d8a.scope: Deactivated successfully. Dec 16 02:16:15.739205 systemd[1]: cri-containerd-6bba98c857e5cf34cf9268a6ba572f66506d432b1a3e17a3c0d6006f32e14d8a.scope: Consumed 466ms CPU time, 174.3M memory peak, 2.5M read from disk, 165.9M written to disk. Dec 16 02:16:15.742245 containerd[1550]: time="2025-12-16T02:16:15.741981067Z" level=info msg="received container exit event container_id:\"6bba98c857e5cf34cf9268a6ba572f66506d432b1a3e17a3c0d6006f32e14d8a\" id:\"6bba98c857e5cf34cf9268a6ba572f66506d432b1a3e17a3c0d6006f32e14d8a\" pid:3514 exited_at:{seconds:1765851375 nanos:739977359}" Dec 16 02:16:15.743000 audit: BPF prog-id=176 op=UNLOAD Dec 16 02:16:15.765084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bba98c857e5cf34cf9268a6ba572f66506d432b1a3e17a3c0d6006f32e14d8a-rootfs.mount: Deactivated successfully. Dec 16 02:16:15.803870 kubelet[2693]: I1216 02:16:15.803678 2693 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 02:16:15.850968 systemd[1]: Created slice kubepods-burstable-pod02d3c6ac_9315_4900_b355_1efdfc7c7665.slice - libcontainer container kubepods-burstable-pod02d3c6ac_9315_4900_b355_1efdfc7c7665.slice. Dec 16 02:16:15.862283 systemd[1]: Created slice kubepods-burstable-podeca2cc20_8a5f_44b4_a022_1a39eef052f3.slice - libcontainer container kubepods-burstable-podeca2cc20_8a5f_44b4_a022_1a39eef052f3.slice. Dec 16 02:16:15.869893 systemd[1]: Created slice kubepods-besteffort-podc7d46623_447d_4a3b_a433_802c6ce8e063.slice - libcontainer container kubepods-besteffort-podc7d46623_447d_4a3b_a433_802c6ce8e063.slice. Dec 16 02:16:15.882920 systemd[1]: Created slice kubepods-besteffort-pod940a1e00_e3f0_45f9_b45b_33acce551ddd.slice - libcontainer container kubepods-besteffort-pod940a1e00_e3f0_45f9_b45b_33acce551ddd.slice. Dec 16 02:16:15.891463 systemd[1]: Created slice kubepods-besteffort-podda89ae5d_cbd0_4280_a9f3_2a8db9ab7544.slice - libcontainer container kubepods-besteffort-podda89ae5d_cbd0_4280_a9f3_2a8db9ab7544.slice. Dec 16 02:16:15.897801 systemd[1]: Created slice kubepods-besteffort-pod7aeba5eb_89d7_4d56_af0a_38d8908b6a09.slice - libcontainer container kubepods-besteffort-pod7aeba5eb_89d7_4d56_af0a_38d8908b6a09.slice. Dec 16 02:16:15.904319 systemd[1]: Created slice kubepods-besteffort-pod1740e8a2_cfc1_49fa_aafb_3ebfadc1402f.slice - libcontainer container kubepods-besteffort-pod1740e8a2_cfc1_49fa_aafb_3ebfadc1402f.slice. 
Dec 16 02:16:16.027242 kubelet[2693]: I1216 02:16:16.026970 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da89ae5d-cbd0-4280-a9f3-2a8db9ab7544-whisker-ca-bundle\") pod \"whisker-658988df4f-v9x8p\" (UID: \"da89ae5d-cbd0-4280-a9f3-2a8db9ab7544\") " pod="calico-system/whisker-658988df4f-v9x8p" Dec 16 02:16:16.027242 kubelet[2693]: I1216 02:16:16.027064 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7d46623-447d-4a3b-a433-802c6ce8e063-goldmane-ca-bundle\") pod \"goldmane-666569f655-lddqc\" (UID: \"c7d46623-447d-4a3b-a433-802c6ce8e063\") " pod="calico-system/goldmane-666569f655-lddqc" Dec 16 02:16:16.027242 kubelet[2693]: I1216 02:16:16.027112 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d89cp\" (UniqueName: \"kubernetes.io/projected/940a1e00-e3f0-45f9-b45b-33acce551ddd-kube-api-access-d89cp\") pod \"calico-apiserver-79dfb47d67-2sm7d\" (UID: \"940a1e00-e3f0-45f9-b45b-33acce551ddd\") " pod="calico-apiserver/calico-apiserver-79dfb47d67-2sm7d" Dec 16 02:16:16.027242 kubelet[2693]: I1216 02:16:16.027133 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pscw5\" (UniqueName: \"kubernetes.io/projected/7aeba5eb-89d7-4d56-af0a-38d8908b6a09-kube-api-access-pscw5\") pod \"calico-kube-controllers-754cc876f4-x89dv\" (UID: \"7aeba5eb-89d7-4d56-af0a-38d8908b6a09\") " pod="calico-system/calico-kube-controllers-754cc876f4-x89dv" Dec 16 02:16:16.027242 kubelet[2693]: I1216 02:16:16.027150 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6j6n\" (UniqueName: \"kubernetes.io/projected/02d3c6ac-9315-4900-b355-1efdfc7c7665-kube-api-access-j6j6n\") pod \"coredns-668d6bf9bc-86652\" (UID: \"02d3c6ac-9315-4900-b355-1efdfc7c7665\") " pod="kube-system/coredns-668d6bf9bc-86652" Dec 16 02:16:16.027537 kubelet[2693]: I1216 02:16:16.027168 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eca2cc20-8a5f-44b4-a022-1a39eef052f3-config-volume\") pod \"coredns-668d6bf9bc-86t5b\" (UID: \"eca2cc20-8a5f-44b4-a022-1a39eef052f3\") " pod="kube-system/coredns-668d6bf9bc-86t5b" Dec 16 02:16:16.027537 kubelet[2693]: I1216 02:16:16.027207 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02d3c6ac-9315-4900-b355-1efdfc7c7665-config-volume\") pod \"coredns-668d6bf9bc-86652\" (UID: \"02d3c6ac-9315-4900-b355-1efdfc7c7665\") " pod="kube-system/coredns-668d6bf9bc-86652" Dec 16 02:16:16.027537 kubelet[2693]: I1216 02:16:16.027223 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1740e8a2-cfc1-49fa-aafb-3ebfadc1402f-calico-apiserver-certs\") pod \"calico-apiserver-79dfb47d67-g8xrk\" (UID: \"1740e8a2-cfc1-49fa-aafb-3ebfadc1402f\") " pod="calico-apiserver/calico-apiserver-79dfb47d67-g8xrk" Dec 16 02:16:16.027537 kubelet[2693]: I1216 02:16:16.027240 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw7zj\" (UniqueName: 
\"kubernetes.io/projected/1740e8a2-cfc1-49fa-aafb-3ebfadc1402f-kube-api-access-jw7zj\") pod \"calico-apiserver-79dfb47d67-g8xrk\" (UID: \"1740e8a2-cfc1-49fa-aafb-3ebfadc1402f\") " pod="calico-apiserver/calico-apiserver-79dfb47d67-g8xrk" Dec 16 02:16:16.027537 kubelet[2693]: I1216 02:16:16.027266 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d46623-447d-4a3b-a433-802c6ce8e063-config\") pod \"goldmane-666569f655-lddqc\" (UID: \"c7d46623-447d-4a3b-a433-802c6ce8e063\") " pod="calico-system/goldmane-666569f655-lddqc" Dec 16 02:16:16.027681 kubelet[2693]: I1216 02:16:16.027285 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2z62\" (UniqueName: \"kubernetes.io/projected/c7d46623-447d-4a3b-a433-802c6ce8e063-kube-api-access-g2z62\") pod \"goldmane-666569f655-lddqc\" (UID: \"c7d46623-447d-4a3b-a433-802c6ce8e063\") " pod="calico-system/goldmane-666569f655-lddqc" Dec 16 02:16:16.027681 kubelet[2693]: I1216 02:16:16.027300 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjpw7\" (UniqueName: \"kubernetes.io/projected/eca2cc20-8a5f-44b4-a022-1a39eef052f3-kube-api-access-gjpw7\") pod \"coredns-668d6bf9bc-86t5b\" (UID: \"eca2cc20-8a5f-44b4-a022-1a39eef052f3\") " pod="kube-system/coredns-668d6bf9bc-86t5b" Dec 16 02:16:16.027681 kubelet[2693]: I1216 02:16:16.027317 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/da89ae5d-cbd0-4280-a9f3-2a8db9ab7544-whisker-backend-key-pair\") pod \"whisker-658988df4f-v9x8p\" (UID: \"da89ae5d-cbd0-4280-a9f3-2a8db9ab7544\") " pod="calico-system/whisker-658988df4f-v9x8p" Dec 16 02:16:16.027681 kubelet[2693]: I1216 02:16:16.027332 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg9wd\" (UniqueName: \"kubernetes.io/projected/da89ae5d-cbd0-4280-a9f3-2a8db9ab7544-kube-api-access-cg9wd\") pod \"whisker-658988df4f-v9x8p\" (UID: \"da89ae5d-cbd0-4280-a9f3-2a8db9ab7544\") " pod="calico-system/whisker-658988df4f-v9x8p" Dec 16 02:16:16.027681 kubelet[2693]: I1216 02:16:16.027348 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c7d46623-447d-4a3b-a433-802c6ce8e063-goldmane-key-pair\") pod \"goldmane-666569f655-lddqc\" (UID: \"c7d46623-447d-4a3b-a433-802c6ce8e063\") " pod="calico-system/goldmane-666569f655-lddqc" Dec 16 02:16:16.027784 kubelet[2693]: I1216 02:16:16.027366 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/940a1e00-e3f0-45f9-b45b-33acce551ddd-calico-apiserver-certs\") pod \"calico-apiserver-79dfb47d67-2sm7d\" (UID: \"940a1e00-e3f0-45f9-b45b-33acce551ddd\") " pod="calico-apiserver/calico-apiserver-79dfb47d67-2sm7d" Dec 16 02:16:16.027784 kubelet[2693]: I1216 02:16:16.027384 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7aeba5eb-89d7-4d56-af0a-38d8908b6a09-tigera-ca-bundle\") pod \"calico-kube-controllers-754cc876f4-x89dv\" (UID: \"7aeba5eb-89d7-4d56-af0a-38d8908b6a09\") " 
pod="calico-system/calico-kube-controllers-754cc876f4-x89dv" Dec 16 02:16:16.165108 kubelet[2693]: E1216 02:16:16.164873 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:16.166819 containerd[1550]: time="2025-12-16T02:16:16.166775140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-86t5b,Uid:eca2cc20-8a5f-44b4-a022-1a39eef052f3,Namespace:kube-system,Attempt:0,}" Dec 16 02:16:16.181743 containerd[1550]: time="2025-12-16T02:16:16.181700873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lddqc,Uid:c7d46623-447d-4a3b-a433-802c6ce8e063,Namespace:calico-system,Attempt:0,}" Dec 16 02:16:16.188360 containerd[1550]: time="2025-12-16T02:16:16.188313613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79dfb47d67-2sm7d,Uid:940a1e00-e3f0-45f9-b45b-33acce551ddd,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:16:16.196001 containerd[1550]: time="2025-12-16T02:16:16.195943065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658988df4f-v9x8p,Uid:da89ae5d-cbd0-4280-a9f3-2a8db9ab7544,Namespace:calico-system,Attempt:0,}" Dec 16 02:16:16.203222 containerd[1550]: time="2025-12-16T02:16:16.203122569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-754cc876f4-x89dv,Uid:7aeba5eb-89d7-4d56-af0a-38d8908b6a09,Namespace:calico-system,Attempt:0,}" Dec 16 02:16:16.208891 containerd[1550]: time="2025-12-16T02:16:16.208837816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79dfb47d67-g8xrk,Uid:1740e8a2-cfc1-49fa-aafb-3ebfadc1402f,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:16:16.297880 containerd[1550]: time="2025-12-16T02:16:16.297723995Z" level=error msg="Failed to destroy network for sandbox \"66642953ca90217fcaa2028be09524d65ba60876f92629aa8d1690c42a4d641e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.305987 containerd[1550]: time="2025-12-16T02:16:16.305922850Z" level=error msg="Failed to destroy network for sandbox \"a9479e7e1810908e4d793264da4abe13ce9199f615451a2c656a99d1d44304cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.306121 containerd[1550]: time="2025-12-16T02:16:16.306082634Z" level=error msg="Failed to destroy network for sandbox \"da89156ae4fe7ef82d5a0f5c6d570d98bd14fa208c6ac1880c73cf2da356b133\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.306230 containerd[1550]: time="2025-12-16T02:16:16.306196131Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658988df4f-v9x8p,Uid:da89ae5d-cbd0-4280-a9f3-2a8db9ab7544,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"66642953ca90217fcaa2028be09524d65ba60876f92629aa8d1690c42a4d641e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 
02:16:16.306522 kubelet[2693]: E1216 02:16:16.306480 2693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66642953ca90217fcaa2028be09524d65ba60876f92629aa8d1690c42a4d641e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.308934 containerd[1550]: time="2025-12-16T02:16:16.308884769Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79dfb47d67-g8xrk,Uid:1740e8a2-cfc1-49fa-aafb-3ebfadc1402f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9479e7e1810908e4d793264da4abe13ce9199f615451a2c656a99d1d44304cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.309308 kubelet[2693]: E1216 02:16:16.309090 2693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9479e7e1810908e4d793264da4abe13ce9199f615451a2c656a99d1d44304cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.309308 kubelet[2693]: E1216 02:16:16.309153 2693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66642953ca90217fcaa2028be09524d65ba60876f92629aa8d1690c42a4d641e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-658988df4f-v9x8p" Dec 16 02:16:16.309308 kubelet[2693]: E1216 02:16:16.309176 2693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9479e7e1810908e4d793264da4abe13ce9199f615451a2c656a99d1d44304cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79dfb47d67-g8xrk" Dec 16 02:16:16.309308 kubelet[2693]: E1216 02:16:16.309196 2693 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66642953ca90217fcaa2028be09524d65ba60876f92629aa8d1690c42a4d641e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-658988df4f-v9x8p" Dec 16 02:16:16.309450 kubelet[2693]: E1216 02:16:16.309254 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-658988df4f-v9x8p_calico-system(da89ae5d-cbd0-4280-a9f3-2a8db9ab7544)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-658988df4f-v9x8p_calico-system(da89ae5d-cbd0-4280-a9f3-2a8db9ab7544)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66642953ca90217fcaa2028be09524d65ba60876f92629aa8d1690c42a4d641e\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-658988df4f-v9x8p" podUID="da89ae5d-cbd0-4280-a9f3-2a8db9ab7544" Dec 16 02:16:16.309450 kubelet[2693]: E1216 02:16:16.309199 2693 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9479e7e1810908e4d793264da4abe13ce9199f615451a2c656a99d1d44304cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79dfb47d67-g8xrk" Dec 16 02:16:16.309450 kubelet[2693]: E1216 02:16:16.309327 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79dfb47d67-g8xrk_calico-apiserver(1740e8a2-cfc1-49fa-aafb-3ebfadc1402f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79dfb47d67-g8xrk_calico-apiserver(1740e8a2-cfc1-49fa-aafb-3ebfadc1402f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9479e7e1810908e4d793264da4abe13ce9199f615451a2c656a99d1d44304cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79dfb47d67-g8xrk" podUID="1740e8a2-cfc1-49fa-aafb-3ebfadc1402f" Dec 16 02:16:16.311062 containerd[1550]: time="2025-12-16T02:16:16.311020806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lddqc,Uid:c7d46623-447d-4a3b-a433-802c6ce8e063,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"da89156ae4fe7ef82d5a0f5c6d570d98bd14fa208c6ac1880c73cf2da356b133\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.311289 kubelet[2693]: E1216 02:16:16.311262 2693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da89156ae4fe7ef82d5a0f5c6d570d98bd14fa208c6ac1880c73cf2da356b133\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.311412 kubelet[2693]: E1216 02:16:16.311391 2693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da89156ae4fe7ef82d5a0f5c6d570d98bd14fa208c6ac1880c73cf2da356b133\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-lddqc" Dec 16 02:16:16.311520 kubelet[2693]: E1216 02:16:16.311472 2693 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da89156ae4fe7ef82d5a0f5c6d570d98bd14fa208c6ac1880c73cf2da356b133\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-lddqc" Dec 16 02:16:16.311660 kubelet[2693]: E1216 02:16:16.311626 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-lddqc_calico-system(c7d46623-447d-4a3b-a433-802c6ce8e063)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-lddqc_calico-system(c7d46623-447d-4a3b-a433-802c6ce8e063)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da89156ae4fe7ef82d5a0f5c6d570d98bd14fa208c6ac1880c73cf2da356b133\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-lddqc" podUID="c7d46623-447d-4a3b-a433-802c6ce8e063" Dec 16 02:16:16.317194 containerd[1550]: time="2025-12-16T02:16:16.317136233Z" level=error msg="Failed to destroy network for sandbox \"a2b2c9ff8d10a5a34f38d6b67f7a781bfaf842552899ff81f563f8ca0ff90bfe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.317331 containerd[1550]: time="2025-12-16T02:16:16.317309658Z" level=error msg="Failed to destroy network for sandbox \"e8bd35eb0e69e3dcb7f8ae991b4e216dc49ad77d9802fb8a9320a341c5fb4b0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.324104 containerd[1550]: time="2025-12-16T02:16:16.324055739Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-754cc876f4-x89dv,Uid:7aeba5eb-89d7-4d56-af0a-38d8908b6a09,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8bd35eb0e69e3dcb7f8ae991b4e216dc49ad77d9802fb8a9320a341c5fb4b0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.324534 kubelet[2693]: E1216 02:16:16.324293 2693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8bd35eb0e69e3dcb7f8ae991b4e216dc49ad77d9802fb8a9320a341c5fb4b0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.324534 kubelet[2693]: E1216 02:16:16.324344 2693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8bd35eb0e69e3dcb7f8ae991b4e216dc49ad77d9802fb8a9320a341c5fb4b0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-754cc876f4-x89dv" Dec 16 02:16:16.324534 kubelet[2693]: E1216 02:16:16.324366 2693 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8bd35eb0e69e3dcb7f8ae991b4e216dc49ad77d9802fb8a9320a341c5fb4b0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-754cc876f4-x89dv" Dec 16 02:16:16.324706 kubelet[2693]: E1216 02:16:16.324404 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-754cc876f4-x89dv_calico-system(7aeba5eb-89d7-4d56-af0a-38d8908b6a09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-754cc876f4-x89dv_calico-system(7aeba5eb-89d7-4d56-af0a-38d8908b6a09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8bd35eb0e69e3dcb7f8ae991b4e216dc49ad77d9802fb8a9320a341c5fb4b0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-754cc876f4-x89dv" podUID="7aeba5eb-89d7-4d56-af0a-38d8908b6a09" Dec 16 02:16:16.325703 containerd[1550]: time="2025-12-16T02:16:16.325666938Z" level=error msg="Failed to destroy network for sandbox \"d81838893855a8b4fde0a40d3eafe533358d40a3c178552d18dc71cc4bbde5f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.327388 containerd[1550]: time="2025-12-16T02:16:16.327343706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-86t5b,Uid:eca2cc20-8a5f-44b4-a022-1a39eef052f3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d81838893855a8b4fde0a40d3eafe533358d40a3c178552d18dc71cc4bbde5f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.327668 kubelet[2693]: E1216 02:16:16.327521 2693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d81838893855a8b4fde0a40d3eafe533358d40a3c178552d18dc71cc4bbde5f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.327668 kubelet[2693]: E1216 02:16:16.327559 2693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d81838893855a8b4fde0a40d3eafe533358d40a3c178552d18dc71cc4bbde5f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-86t5b" Dec 16 02:16:16.327791 kubelet[2693]: E1216 02:16:16.327575 2693 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d81838893855a8b4fde0a40d3eafe533358d40a3c178552d18dc71cc4bbde5f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-86t5b" Dec 16 02:16:16.327873 kubelet[2693]: E1216 02:16:16.327850 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-86t5b_kube-system(eca2cc20-8a5f-44b4-a022-1a39eef052f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-86t5b_kube-system(eca2cc20-8a5f-44b4-a022-1a39eef052f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d81838893855a8b4fde0a40d3eafe533358d40a3c178552d18dc71cc4bbde5f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-86t5b" podUID="eca2cc20-8a5f-44b4-a022-1a39eef052f3" Dec 16 02:16:16.337080 containerd[1550]: time="2025-12-16T02:16:16.337035223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79dfb47d67-2sm7d,Uid:940a1e00-e3f0-45f9-b45b-33acce551ddd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2b2c9ff8d10a5a34f38d6b67f7a781bfaf842552899ff81f563f8ca0ff90bfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.337338 kubelet[2693]: E1216 02:16:16.337311 2693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2b2c9ff8d10a5a34f38d6b67f7a781bfaf842552899ff81f563f8ca0ff90bfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.337441 kubelet[2693]: E1216 02:16:16.337415 2693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2b2c9ff8d10a5a34f38d6b67f7a781bfaf842552899ff81f563f8ca0ff90bfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79dfb47d67-2sm7d" Dec 16 02:16:16.337521 kubelet[2693]: E1216 02:16:16.337502 2693 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2b2c9ff8d10a5a34f38d6b67f7a781bfaf842552899ff81f563f8ca0ff90bfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79dfb47d67-2sm7d" Dec 16 02:16:16.337678 kubelet[2693]: E1216 02:16:16.337625 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79dfb47d67-2sm7d_calico-apiserver(940a1e00-e3f0-45f9-b45b-33acce551ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79dfb47d67-2sm7d_calico-apiserver(940a1e00-e3f0-45f9-b45b-33acce551ddd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2b2c9ff8d10a5a34f38d6b67f7a781bfaf842552899ff81f563f8ca0ff90bfe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79dfb47d67-2sm7d" podUID="940a1e00-e3f0-45f9-b45b-33acce551ddd" Dec 16 02:16:16.457340 
kubelet[2693]: E1216 02:16:16.457209 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:16.458056 containerd[1550]: time="2025-12-16T02:16:16.458004718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-86652,Uid:02d3c6ac-9315-4900-b355-1efdfc7c7665,Namespace:kube-system,Attempt:0,}" Dec 16 02:16:16.507070 containerd[1550]: time="2025-12-16T02:16:16.507027866Z" level=error msg="Failed to destroy network for sandbox \"a1df7cbecfd9c5b403354700f909d53f213a071c91287988043fb1c22e7553eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.509034 containerd[1550]: time="2025-12-16T02:16:16.508951712Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-86652,Uid:02d3c6ac-9315-4900-b355-1efdfc7c7665,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1df7cbecfd9c5b403354700f909d53f213a071c91287988043fb1c22e7553eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.509267 kubelet[2693]: E1216 02:16:16.509197 2693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1df7cbecfd9c5b403354700f909d53f213a071c91287988043fb1c22e7553eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.509314 kubelet[2693]: E1216 02:16:16.509286 2693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1df7cbecfd9c5b403354700f909d53f213a071c91287988043fb1c22e7553eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-86652" Dec 16 02:16:16.509314 kubelet[2693]: E1216 02:16:16.509306 2693 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1df7cbecfd9c5b403354700f909d53f213a071c91287988043fb1c22e7553eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-86652" Dec 16 02:16:16.509382 kubelet[2693]: E1216 02:16:16.509354 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-86652_kube-system(02d3c6ac-9315-4900-b355-1efdfc7c7665)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-86652_kube-system(02d3c6ac-9315-4900-b355-1efdfc7c7665)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1df7cbecfd9c5b403354700f909d53f213a071c91287988043fb1c22e7553eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-86652" podUID="02d3c6ac-9315-4900-b355-1efdfc7c7665" Dec 16 02:16:16.525951 systemd[1]: Created slice kubepods-besteffort-podfdb43fea_30e5_4f4c_8d7c_cf0f6c47a9ff.slice - libcontainer container kubepods-besteffort-podfdb43fea_30e5_4f4c_8d7c_cf0f6c47a9ff.slice. Dec 16 02:16:16.528136 containerd[1550]: time="2025-12-16T02:16:16.527922804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xd2kv,Uid:fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff,Namespace:calico-system,Attempt:0,}" Dec 16 02:16:16.572724 containerd[1550]: time="2025-12-16T02:16:16.572674879Z" level=error msg="Failed to destroy network for sandbox \"64975a7844348bfadf5d6651c0321dc459b12a5139012bfec21dc857dd515db0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.574404 containerd[1550]: time="2025-12-16T02:16:16.574364890Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xd2kv,Uid:fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64975a7844348bfadf5d6651c0321dc459b12a5139012bfec21dc857dd515db0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.574678 kubelet[2693]: E1216 02:16:16.574576 2693 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64975a7844348bfadf5d6651c0321dc459b12a5139012bfec21dc857dd515db0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 02:16:16.574678 kubelet[2693]: E1216 02:16:16.574654 2693 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64975a7844348bfadf5d6651c0321dc459b12a5139012bfec21dc857dd515db0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xd2kv" Dec 16 02:16:16.574749 kubelet[2693]: E1216 02:16:16.574684 2693 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64975a7844348bfadf5d6651c0321dc459b12a5139012bfec21dc857dd515db0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xd2kv" Dec 16 02:16:16.574749 kubelet[2693]: E1216 02:16:16.574735 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xd2kv_calico-system(fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xd2kv_calico-system(fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64975a7844348bfadf5d6651c0321dc459b12a5139012bfec21dc857dd515db0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xd2kv" podUID="fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff" Dec 16 02:16:16.654994 kubelet[2693]: E1216 02:16:16.654658 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:16.656670 containerd[1550]: time="2025-12-16T02:16:16.656387210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 02:16:17.143495 systemd[1]: run-netns-cni\x2dc7e1a666\x2ddbfd\x2d6c7d\x2d4a1d\x2dd188208ad9c4.mount: Deactivated successfully. Dec 16 02:16:17.143613 systemd[1]: run-netns-cni\x2dc9f600d9\x2d7bb8\x2d73bf\x2d327a\x2dd98409a96eea.mount: Deactivated successfully. Dec 16 02:16:20.657812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4001581366.mount: Deactivated successfully. Dec 16 02:16:20.990450 containerd[1550]: time="2025-12-16T02:16:20.989690600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 16 02:16:20.993238 containerd[1550]: time="2025-12-16T02:16:20.993197699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.336767762s" Dec 16 02:16:20.993238 containerd[1550]: time="2025-12-16T02:16:20.993233463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 02:16:21.010595 containerd[1550]: time="2025-12-16T02:16:21.010532211Z" level=info msg="CreateContainer within sandbox \"0ea54b99b8f2abdc6e0b44d25785d8aa99609530143b79be364458a648fa7ff3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 02:16:21.016256 containerd[1550]: time="2025-12-16T02:16:21.016191730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:16:21.018275 containerd[1550]: time="2025-12-16T02:16:21.018236150Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:16:21.018895 containerd[1550]: time="2025-12-16T02:16:21.018837666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 02:16:21.025610 containerd[1550]: time="2025-12-16T02:16:21.025051016Z" level=info msg="Container cfff04e5908b76eda9cbe5d106178e3ce7b0223e4abcb5e5180972c9e05ea535: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:16:21.041681 containerd[1550]: time="2025-12-16T02:16:21.041618920Z" level=info msg="CreateContainer within sandbox \"0ea54b99b8f2abdc6e0b44d25785d8aa99609530143b79be364458a648fa7ff3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cfff04e5908b76eda9cbe5d106178e3ce7b0223e4abcb5e5180972c9e05ea535\"" Dec 16 02:16:21.042238 containerd[1550]: time="2025-12-16T02:16:21.042191153Z" level=info msg="StartContainer for 
\"cfff04e5908b76eda9cbe5d106178e3ce7b0223e4abcb5e5180972c9e05ea535\"" Dec 16 02:16:21.044397 containerd[1550]: time="2025-12-16T02:16:21.044343546Z" level=info msg="connecting to shim cfff04e5908b76eda9cbe5d106178e3ce7b0223e4abcb5e5180972c9e05ea535" address="unix:///run/containerd/s/d4adc010cbaafae89ea314472705035513c178a432c87a8a313f0b8b23d79287" protocol=ttrpc version=3 Dec 16 02:16:21.072903 systemd[1]: Started cri-containerd-cfff04e5908b76eda9cbe5d106178e3ce7b0223e4abcb5e5180972c9e05ea535.scope - libcontainer container cfff04e5908b76eda9cbe5d106178e3ce7b0223e4abcb5e5180972c9e05ea535. Dec 16 02:16:21.129000 audit: BPF prog-id=177 op=LOAD Dec 16 02:16:21.132065 kernel: kauditd_printk_skb: 40 callbacks suppressed Dec 16 02:16:21.132144 kernel: audit: type=1334 audit(1765851381.129:583): prog-id=177 op=LOAD Dec 16 02:16:21.132177 kernel: audit: type=1300 audit(1765851381.129:583): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3300 pid=3825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:21.129000 audit[3825]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3300 pid=3825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:21.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366666630346535393038623736656461396362653564313036313738 Dec 16 02:16:21.138421 kernel: audit: type=1327 audit(1765851381.129:583): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366666630346535393038623736656461396362653564313036313738 Dec 16 02:16:21.138491 kernel: audit: type=1334 audit(1765851381.129:584): prog-id=178 op=LOAD Dec 16 02:16:21.129000 audit: BPF prog-id=178 op=LOAD Dec 16 02:16:21.129000 audit[3825]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3300 pid=3825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:21.142481 kernel: audit: type=1300 audit(1765851381.129:584): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3300 pid=3825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:21.142554 kernel: audit: type=1327 audit(1765851381.129:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366666630346535393038623736656461396362653564313036313738 Dec 16 02:16:21.129000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366666630346535393038623736656461396362653564313036313738 Dec 16 02:16:21.130000 audit: BPF prog-id=178 op=UNLOAD Dec 16 02:16:21.130000 audit[3825]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:21.149492 kernel: audit: type=1334 audit(1765851381.130:585): prog-id=178 op=UNLOAD Dec 16 02:16:21.149560 kernel: audit: type=1300 audit(1765851381.130:585): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:21.149601 kernel: audit: type=1327 audit(1765851381.130:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366666630346535393038623736656461396362653564313036313738 Dec 16 02:16:21.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366666630346535393038623736656461396362653564313036313738 Dec 16 02:16:21.130000 audit: BPF prog-id=177 op=UNLOAD Dec 16 02:16:21.153605 kernel: audit: type=1334 audit(1765851381.130:586): prog-id=177 op=UNLOAD Dec 16 02:16:21.130000 audit[3825]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3300 pid=3825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:21.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366666630346535393038623736656461396362653564313036313738 Dec 16 02:16:21.130000 audit: BPF prog-id=179 op=LOAD Dec 16 02:16:21.130000 audit[3825]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3300 pid=3825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:21.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366666630346535393038623736656461396362653564313036313738 Dec 16 02:16:21.172617 containerd[1550]: time="2025-12-16T02:16:21.172508225Z" level=info msg="StartContainer for \"cfff04e5908b76eda9cbe5d106178e3ce7b0223e4abcb5e5180972c9e05ea535\" returns successfully" Dec 16 02:16:21.294173 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Dec 16 02:16:21.294307 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 16 02:16:21.460606 kubelet[2693]: I1216 02:16:21.459761 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg9wd\" (UniqueName: \"kubernetes.io/projected/da89ae5d-cbd0-4280-a9f3-2a8db9ab7544-kube-api-access-cg9wd\") pod \"da89ae5d-cbd0-4280-a9f3-2a8db9ab7544\" (UID: \"da89ae5d-cbd0-4280-a9f3-2a8db9ab7544\") " Dec 16 02:16:21.460606 kubelet[2693]: I1216 02:16:21.459822 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da89ae5d-cbd0-4280-a9f3-2a8db9ab7544-whisker-ca-bundle\") pod \"da89ae5d-cbd0-4280-a9f3-2a8db9ab7544\" (UID: \"da89ae5d-cbd0-4280-a9f3-2a8db9ab7544\") " Dec 16 02:16:21.460606 kubelet[2693]: I1216 02:16:21.459843 2693 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/da89ae5d-cbd0-4280-a9f3-2a8db9ab7544-whisker-backend-key-pair\") pod \"da89ae5d-cbd0-4280-a9f3-2a8db9ab7544\" (UID: \"da89ae5d-cbd0-4280-a9f3-2a8db9ab7544\") " Dec 16 02:16:21.465048 kubelet[2693]: I1216 02:16:21.464866 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da89ae5d-cbd0-4280-a9f3-2a8db9ab7544-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "da89ae5d-cbd0-4280-a9f3-2a8db9ab7544" (UID: "da89ae5d-cbd0-4280-a9f3-2a8db9ab7544"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 02:16:21.469249 kubelet[2693]: I1216 02:16:21.469147 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da89ae5d-cbd0-4280-a9f3-2a8db9ab7544-kube-api-access-cg9wd" (OuterVolumeSpecName: "kube-api-access-cg9wd") pod "da89ae5d-cbd0-4280-a9f3-2a8db9ab7544" (UID: "da89ae5d-cbd0-4280-a9f3-2a8db9ab7544"). InnerVolumeSpecName "kube-api-access-cg9wd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 02:16:21.469334 kubelet[2693]: I1216 02:16:21.469277 2693 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da89ae5d-cbd0-4280-a9f3-2a8db9ab7544-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "da89ae5d-cbd0-4280-a9f3-2a8db9ab7544" (UID: "da89ae5d-cbd0-4280-a9f3-2a8db9ab7544"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 02:16:21.532638 systemd[1]: Removed slice kubepods-besteffort-podda89ae5d_cbd0_4280_a9f3_2a8db9ab7544.slice - libcontainer container kubepods-besteffort-podda89ae5d_cbd0_4280_a9f3_2a8db9ab7544.slice. 
Dec 16 02:16:21.560762 kubelet[2693]: I1216 02:16:21.560707 2693 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cg9wd\" (UniqueName: \"kubernetes.io/projected/da89ae5d-cbd0-4280-a9f3-2a8db9ab7544-kube-api-access-cg9wd\") on node \"localhost\" DevicePath \"\"" Dec 16 02:16:21.560762 kubelet[2693]: I1216 02:16:21.560743 2693 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da89ae5d-cbd0-4280-a9f3-2a8db9ab7544-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 16 02:16:21.560762 kubelet[2693]: I1216 02:16:21.560752 2693 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/da89ae5d-cbd0-4280-a9f3-2a8db9ab7544-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 16 02:16:21.658777 systemd[1]: var-lib-kubelet-pods-da89ae5d\x2dcbd0\x2d4280\x2da9f3\x2d2a8db9ab7544-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcg9wd.mount: Deactivated successfully. Dec 16 02:16:21.658867 systemd[1]: var-lib-kubelet-pods-da89ae5d\x2dcbd0\x2d4280\x2da9f3\x2d2a8db9ab7544-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 02:16:21.690904 kubelet[2693]: E1216 02:16:21.690720 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:21.709318 kubelet[2693]: I1216 02:16:21.709247 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l8tw2" podStartSLOduration=1.357754359 podStartE2EDuration="13.709229676s" podCreationTimestamp="2025-12-16 02:16:08 +0000 UTC" firstStartedPulling="2025-12-16 02:16:08.647207939 +0000 UTC m=+25.208358682" lastFinishedPulling="2025-12-16 02:16:20.998683296 +0000 UTC m=+37.559833999" observedRunningTime="2025-12-16 02:16:21.708824024 +0000 UTC m=+38.269974767" watchObservedRunningTime="2025-12-16 02:16:21.709229676 +0000 UTC m=+38.270380379" Dec 16 02:16:21.757684 systemd[1]: Created slice kubepods-besteffort-pod0f118220_9d7c_4c48_a0bc_35415c01901e.slice - libcontainer container kubepods-besteffort-pod0f118220_9d7c_4c48_a0bc_35415c01901e.slice. 
Dec 16 02:16:21.762999 kubelet[2693]: I1216 02:16:21.762956 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kq48\" (UniqueName: \"kubernetes.io/projected/0f118220-9d7c-4c48-a0bc-35415c01901e-kube-api-access-6kq48\") pod \"whisker-cd49d4685-v8mf6\" (UID: \"0f118220-9d7c-4c48-a0bc-35415c01901e\") " pod="calico-system/whisker-cd49d4685-v8mf6" Dec 16 02:16:21.763191 kubelet[2693]: I1216 02:16:21.763011 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f118220-9d7c-4c48-a0bc-35415c01901e-whisker-ca-bundle\") pod \"whisker-cd49d4685-v8mf6\" (UID: \"0f118220-9d7c-4c48-a0bc-35415c01901e\") " pod="calico-system/whisker-cd49d4685-v8mf6" Dec 16 02:16:21.763191 kubelet[2693]: I1216 02:16:21.763050 2693 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0f118220-9d7c-4c48-a0bc-35415c01901e-whisker-backend-key-pair\") pod \"whisker-cd49d4685-v8mf6\" (UID: \"0f118220-9d7c-4c48-a0bc-35415c01901e\") " pod="calico-system/whisker-cd49d4685-v8mf6" Dec 16 02:16:22.064277 containerd[1550]: time="2025-12-16T02:16:22.064170140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cd49d4685-v8mf6,Uid:0f118220-9d7c-4c48-a0bc-35415c01901e,Namespace:calico-system,Attempt:0,}" Dec 16 02:16:22.316639 systemd-networkd[1468]: cali0a8505ad26e: Link UP Dec 16 02:16:22.317724 systemd-networkd[1468]: cali0a8505ad26e: Gained carrier Dec 16 02:16:22.337890 containerd[1550]: 2025-12-16 02:16:22.156 [INFO][3892] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 02:16:22.337890 containerd[1550]: 2025-12-16 02:16:22.187 [INFO][3892] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--cd49d4685--v8mf6-eth0 whisker-cd49d4685- calico-system 0f118220-9d7c-4c48-a0bc-35415c01901e 964 0 2025-12-16 02:16:21 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:cd49d4685 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-cd49d4685-v8mf6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0a8505ad26e [] [] }} ContainerID="90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb" Namespace="calico-system" Pod="whisker-cd49d4685-v8mf6" WorkloadEndpoint="localhost-k8s-whisker--cd49d4685--v8mf6-" Dec 16 02:16:22.337890 containerd[1550]: 2025-12-16 02:16:22.187 [INFO][3892] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb" Namespace="calico-system" Pod="whisker-cd49d4685-v8mf6" WorkloadEndpoint="localhost-k8s-whisker--cd49d4685--v8mf6-eth0" Dec 16 02:16:22.337890 containerd[1550]: 2025-12-16 02:16:22.248 [INFO][3907] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb" HandleID="k8s-pod-network.90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb" Workload="localhost-k8s-whisker--cd49d4685--v8mf6-eth0" Dec 16 02:16:22.338203 containerd[1550]: 2025-12-16 02:16:22.248 [INFO][3907] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb" 
HandleID="k8s-pod-network.90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb" Workload="localhost-k8s-whisker--cd49d4685--v8mf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002be5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-cd49d4685-v8mf6", "timestamp":"2025-12-16 02:16:22.248774103 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:16:22.338203 containerd[1550]: 2025-12-16 02:16:22.248 [INFO][3907] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:16:22.338203 containerd[1550]: 2025-12-16 02:16:22.249 [INFO][3907] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 02:16:22.338203 containerd[1550]: 2025-12-16 02:16:22.249 [INFO][3907] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 02:16:22.338203 containerd[1550]: 2025-12-16 02:16:22.259 [INFO][3907] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb" host="localhost" Dec 16 02:16:22.338203 containerd[1550]: 2025-12-16 02:16:22.266 [INFO][3907] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 02:16:22.338203 containerd[1550]: 2025-12-16 02:16:22.272 [INFO][3907] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 02:16:22.338203 containerd[1550]: 2025-12-16 02:16:22.276 [INFO][3907] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 02:16:22.338203 containerd[1550]: 2025-12-16 02:16:22.280 [INFO][3907] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 02:16:22.338203 containerd[1550]: 2025-12-16 02:16:22.280 [INFO][3907] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb" host="localhost" Dec 16 02:16:22.338451 containerd[1550]: 2025-12-16 02:16:22.286 [INFO][3907] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb Dec 16 02:16:22.338451 containerd[1550]: 2025-12-16 02:16:22.293 [INFO][3907] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb" host="localhost" Dec 16 02:16:22.338451 containerd[1550]: 2025-12-16 02:16:22.302 [INFO][3907] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb" host="localhost" Dec 16 02:16:22.338451 containerd[1550]: 2025-12-16 02:16:22.302 [INFO][3907] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb" host="localhost" Dec 16 02:16:22.338451 containerd[1550]: 2025-12-16 02:16:22.302 [INFO][3907] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:16:22.338451 containerd[1550]: 2025-12-16 02:16:22.302 [INFO][3907] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb" HandleID="k8s-pod-network.90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb" Workload="localhost-k8s-whisker--cd49d4685--v8mf6-eth0" Dec 16 02:16:22.338582 containerd[1550]: 2025-12-16 02:16:22.306 [INFO][3892] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb" Namespace="calico-system" Pod="whisker-cd49d4685-v8mf6" WorkloadEndpoint="localhost-k8s-whisker--cd49d4685--v8mf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--cd49d4685--v8mf6-eth0", GenerateName:"whisker-cd49d4685-", Namespace:"calico-system", SelfLink:"", UID:"0f118220-9d7c-4c48-a0bc-35415c01901e", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cd49d4685", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-cd49d4685-v8mf6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0a8505ad26e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:16:22.338582 containerd[1550]: 2025-12-16 02:16:22.306 [INFO][3892] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb" Namespace="calico-system" Pod="whisker-cd49d4685-v8mf6" WorkloadEndpoint="localhost-k8s-whisker--cd49d4685--v8mf6-eth0" Dec 16 02:16:22.338687 containerd[1550]: 2025-12-16 02:16:22.307 [INFO][3892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a8505ad26e ContainerID="90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb" Namespace="calico-system" Pod="whisker-cd49d4685-v8mf6" WorkloadEndpoint="localhost-k8s-whisker--cd49d4685--v8mf6-eth0" Dec 16 02:16:22.338687 containerd[1550]: 2025-12-16 02:16:22.317 [INFO][3892] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb" Namespace="calico-system" Pod="whisker-cd49d4685-v8mf6" WorkloadEndpoint="localhost-k8s-whisker--cd49d4685--v8mf6-eth0" Dec 16 02:16:22.338727 containerd[1550]: 2025-12-16 02:16:22.318 [INFO][3892] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb" Namespace="calico-system" Pod="whisker-cd49d4685-v8mf6" WorkloadEndpoint="localhost-k8s-whisker--cd49d4685--v8mf6-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--cd49d4685--v8mf6-eth0", GenerateName:"whisker-cd49d4685-", Namespace:"calico-system", SelfLink:"", UID:"0f118220-9d7c-4c48-a0bc-35415c01901e", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cd49d4685", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb", Pod:"whisker-cd49d4685-v8mf6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0a8505ad26e", MAC:"66:b9:f0:39:40:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:16:22.338772 containerd[1550]: 2025-12-16 02:16:22.335 [INFO][3892] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb" Namespace="calico-system" Pod="whisker-cd49d4685-v8mf6" WorkloadEndpoint="localhost-k8s-whisker--cd49d4685--v8mf6-eth0" Dec 16 02:16:22.385134 containerd[1550]: time="2025-12-16T02:16:22.385068779Z" level=info msg="connecting to shim 90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb" address="unix:///run/containerd/s/c363b9fa6e59120840964508402604469fb6f501b985d6ff28108e393bab2f5d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:16:22.406799 systemd[1]: Started cri-containerd-90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb.scope - libcontainer container 90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb. 
Dec 16 02:16:22.415000 audit: BPF prog-id=180 op=LOAD Dec 16 02:16:22.415000 audit: BPF prog-id=181 op=LOAD Dec 16 02:16:22.415000 audit[3942]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3931 pid=3942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930623936396533393630346230343362666566333332356261376666 Dec 16 02:16:22.415000 audit: BPF prog-id=181 op=UNLOAD Dec 16 02:16:22.415000 audit[3942]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3931 pid=3942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930623936396533393630346230343362666566333332356261376666 Dec 16 02:16:22.415000 audit: BPF prog-id=182 op=LOAD Dec 16 02:16:22.415000 audit[3942]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3931 pid=3942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930623936396533393630346230343362666566333332356261376666 Dec 16 02:16:22.415000 audit: BPF prog-id=183 op=LOAD Dec 16 02:16:22.415000 audit[3942]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3931 pid=3942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930623936396533393630346230343362666566333332356261376666 Dec 16 02:16:22.415000 audit: BPF prog-id=183 op=UNLOAD Dec 16 02:16:22.415000 audit[3942]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3931 pid=3942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930623936396533393630346230343362666566333332356261376666 Dec 16 02:16:22.415000 audit: BPF prog-id=182 op=UNLOAD Dec 16 02:16:22.415000 audit[3942]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3931 pid=3942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930623936396533393630346230343362666566333332356261376666 Dec 16 02:16:22.415000 audit: BPF prog-id=184 op=LOAD Dec 16 02:16:22.415000 audit[3942]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3931 pid=3942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930623936396533393630346230343362666566333332356261376666 Dec 16 02:16:22.418285 systemd-resolved[1253]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 02:16:22.437649 containerd[1550]: time="2025-12-16T02:16:22.437616390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cd49d4685-v8mf6,Uid:0f118220-9d7c-4c48-a0bc-35415c01901e,Namespace:calico-system,Attempt:0,} returns sandbox id \"90b969e39604b043bfef3325ba7ffe4e84eba57e56f4955014a68a0b05db00eb\"" Dec 16 02:16:22.441507 containerd[1550]: time="2025-12-16T02:16:22.440536671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 02:16:22.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.94:22-10.0.0.1:41164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:16:22.482156 systemd[1]: Started sshd@7-10.0.0.94:22-10.0.0.1:41164.service - OpenSSH per-connection server daemon (10.0.0.1:41164). 
Dec 16 02:16:22.565000 audit[3970]: USER_ACCT pid=3970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:22.566368 sshd[3970]: Accepted publickey for core from 10.0.0.1 port 41164 ssh2: RSA SHA256:rH1sOPbnKIrlCFQFTuacxecfE1BEKAl/Bfev/eSdaO4 Dec 16 02:16:22.566000 audit[3970]: CRED_ACQ pid=3970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:22.566000 audit[3970]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe2ff57a0 a2=3 a3=0 items=0 ppid=1 pid=3970 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.566000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:16:22.568111 sshd-session[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:16:22.574690 systemd-logind[1526]: New session 9 of user core. Dec 16 02:16:22.583801 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 02:16:22.585000 audit[3970]: USER_START pid=3970 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:22.586000 audit[3974]: CRED_ACQ pid=3974 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:22.648510 containerd[1550]: time="2025-12-16T02:16:22.647720904Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:16:22.670253 containerd[1550]: time="2025-12-16T02:16:22.669501194Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 02:16:22.670253 containerd[1550]: time="2025-12-16T02:16:22.669628730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 02:16:22.670400 kubelet[2693]: E1216 02:16:22.669788 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:16:22.670400 kubelet[2693]: E1216 02:16:22.669836 2693 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:16:22.672064 kubelet[2693]: E1216 02:16:22.671948 2693 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f0d1ca7912f0477b87dcb1de770b774b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6kq48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cd49d4685-v8mf6_calico-system(0f118220-9d7c-4c48-a0bc-35415c01901e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 02:16:22.708717 containerd[1550]: time="2025-12-16T02:16:22.708482530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 02:16:22.722967 kubelet[2693]: I1216 02:16:22.722934 2693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 02:16:22.723353 kubelet[2693]: E1216 02:16:22.723326 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:22.727428 sshd[3974]: Connection closed by 10.0.0.1 port 41164 Dec 16 02:16:22.727819 sshd-session[3970]: pam_unix(sshd:session): session closed for user core Dec 16 02:16:22.728000 audit[3970]: USER_END pid=3970 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:22.729000 audit[3970]: CRED_DISP pid=3970 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:22.732992 systemd[1]: sshd@7-10.0.0.94:22-10.0.0.1:41164.service: Deactivated successfully. Dec 16 02:16:22.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.94:22-10.0.0.1:41164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:16:22.737020 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 02:16:22.740646 systemd-logind[1526]: Session 9 logged out. Waiting for processes to exit. Dec 16 02:16:22.741984 systemd-logind[1526]: Removed session 9. Dec 16 02:16:22.892000 audit: BPF prog-id=185 op=LOAD Dec 16 02:16:22.892000 audit[4117]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd9ce5358 a2=98 a3=ffffd9ce5348 items=0 ppid=4022 pid=4117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.892000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:16:22.892000 audit: BPF prog-id=185 op=UNLOAD Dec 16 02:16:22.892000 audit[4117]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd9ce5328 a3=0 items=0 ppid=4022 pid=4117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.892000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:16:22.892000 audit: BPF prog-id=186 op=LOAD Dec 16 02:16:22.892000 audit[4117]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd9ce5208 a2=74 a3=95 items=0 ppid=4022 pid=4117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.892000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:16:22.892000 audit: BPF prog-id=186 op=UNLOAD Dec 16 02:16:22.892000 audit[4117]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4022 pid=4117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.892000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:16:22.892000 audit: BPF prog-id=187 op=LOAD Dec 16 02:16:22.892000 audit[4117]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd9ce5238 a2=40 a3=ffffd9ce5268 items=0 ppid=4022 pid=4117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.892000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:16:22.892000 audit: BPF prog-id=187 op=UNLOAD Dec 16 02:16:22.892000 audit[4117]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffd9ce5268 items=0 ppid=4022 pid=4117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.892000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 02:16:22.894000 audit: BPF prog-id=188 op=LOAD Dec 16 02:16:22.894000 audit[4118]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff87df368 a2=98 a3=fffff87df358 items=0 ppid=4022 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.894000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:16:22.894000 audit: BPF prog-id=188 op=UNLOAD Dec 16 02:16:22.894000 audit[4118]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff87df338 a3=0 items=0 ppid=4022 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.894000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:16:22.894000 audit: BPF prog-id=189 op=LOAD Dec 16 02:16:22.894000 audit[4118]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff87deff8 a2=74 a3=95 items=0 ppid=4022 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.894000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:16:22.895000 audit: BPF prog-id=189 op=UNLOAD Dec 16 02:16:22.895000 audit[4118]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4022 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.895000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:16:22.895000 audit: BPF prog-id=190 op=LOAD Dec 16 02:16:22.895000 audit[4118]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff87df058 a2=94 a3=2 items=0 ppid=4022 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.895000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:16:22.895000 audit: BPF prog-id=190 op=UNLOAD Dec 16 02:16:22.895000 audit[4118]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 
ppid=4022 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.895000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:16:22.907244 containerd[1550]: time="2025-12-16T02:16:22.907174793Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:16:22.908349 containerd[1550]: time="2025-12-16T02:16:22.908309133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 02:16:22.908495 containerd[1550]: time="2025-12-16T02:16:22.908382182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 02:16:22.908574 kubelet[2693]: E1216 02:16:22.908530 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:16:22.908681 kubelet[2693]: E1216 02:16:22.908581 2693 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:16:22.909057 kubelet[2693]: E1216 02:16:22.908782 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kq48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cd49d4685-v8mf6_calico-system(0f118220-9d7c-4c48-a0bc-35415c01901e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 02:16:22.910241 kubelet[2693]: E1216 02:16:22.910199 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cd49d4685-v8mf6" podUID="0f118220-9d7c-4c48-a0bc-35415c01901e" Dec 16 02:16:22.993000 audit: BPF prog-id=191 op=LOAD Dec 16 02:16:22.993000 audit[4118]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff87df018 a2=40 a3=fffff87df048 items=0 ppid=4022 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.993000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:16:22.993000 audit: BPF prog-id=191 op=UNLOAD Dec 16 02:16:22.993000 audit[4118]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 
a3=fffff87df048 items=0 ppid=4022 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:22.993000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:16:23.003000 audit: BPF prog-id=192 op=LOAD Dec 16 02:16:23.003000 audit[4118]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff87df028 a2=94 a3=4 items=0 ppid=4022 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.003000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:16:23.003000 audit: BPF prog-id=192 op=UNLOAD Dec 16 02:16:23.003000 audit[4118]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4022 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.003000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:16:23.003000 audit: BPF prog-id=193 op=LOAD Dec 16 02:16:23.003000 audit[4118]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff87dee68 a2=94 a3=5 items=0 ppid=4022 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.003000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:16:23.004000 audit: BPF prog-id=193 op=UNLOAD Dec 16 02:16:23.004000 audit[4118]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4022 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.004000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:16:23.004000 audit: BPF prog-id=194 op=LOAD Dec 16 02:16:23.004000 audit[4118]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff87df098 a2=94 a3=6 items=0 ppid=4022 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.004000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:16:23.004000 audit: BPF prog-id=194 op=UNLOAD Dec 16 02:16:23.004000 audit[4118]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4022 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.004000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:16:23.004000 audit: BPF prog-id=195 op=LOAD Dec 16 02:16:23.004000 audit[4118]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff87de868 a2=94 a3=83 items=0 ppid=4022 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.004000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:16:23.004000 audit: BPF prog-id=196 op=LOAD Dec 16 02:16:23.004000 audit[4118]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffff87de628 a2=94 a3=2 items=0 ppid=4022 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.004000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:16:23.004000 audit: BPF prog-id=196 op=UNLOAD Dec 16 02:16:23.004000 audit[4118]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4022 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.004000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:16:23.005000 audit: BPF prog-id=195 op=UNLOAD Dec 16 02:16:23.005000 audit[4118]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=15bae620 a3=15ba1b00 items=0 ppid=4022 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.005000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 02:16:23.015000 audit: BPF prog-id=197 op=LOAD Dec 16 02:16:23.015000 audit[4121]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe6bb4078 a2=98 a3=ffffe6bb4068 items=0 ppid=4022 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.015000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:16:23.015000 audit: BPF prog-id=197 op=UNLOAD Dec 16 02:16:23.015000 audit[4121]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe6bb4048 a3=0 items=0 ppid=4022 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.015000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:16:23.015000 audit: BPF prog-id=198 op=LOAD Dec 16 02:16:23.015000 audit[4121]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe6bb3f28 a2=74 a3=95 items=0 ppid=4022 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.015000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:16:23.015000 audit: BPF prog-id=198 op=UNLOAD Dec 16 02:16:23.015000 audit[4121]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4022 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.015000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:16:23.015000 audit: BPF prog-id=199 op=LOAD Dec 16 02:16:23.015000 audit[4121]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe6bb3f58 a2=40 a3=ffffe6bb3f88 items=0 ppid=4022 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.015000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:16:23.015000 audit: BPF prog-id=199 op=UNLOAD Dec 16 02:16:23.015000 audit[4121]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffe6bb3f88 items=0 ppid=4022 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.015000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 02:16:23.077361 systemd-networkd[1468]: vxlan.calico: Link UP Dec 16 02:16:23.077367 systemd-networkd[1468]: vxlan.calico: Gained carrier Dec 16 02:16:23.093000 audit: BPF prog-id=200 op=LOAD Dec 16 02:16:23.093000 audit[4146]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffea086428 a2=98 a3=ffffea086418 items=0 ppid=4022 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.093000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:16:23.093000 audit: BPF prog-id=200 op=UNLOAD Dec 16 02:16:23.093000 audit[4146]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffea0863f8 a3=0 items=0 ppid=4022 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
02:16:23.093000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:16:23.093000 audit: BPF prog-id=201 op=LOAD Dec 16 02:16:23.093000 audit[4146]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffea086108 a2=74 a3=95 items=0 ppid=4022 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.093000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:16:23.093000 audit: BPF prog-id=201 op=UNLOAD Dec 16 02:16:23.093000 audit[4146]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4022 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.093000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:16:23.093000 audit: BPF prog-id=202 op=LOAD Dec 16 02:16:23.093000 audit[4146]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffea086168 a2=94 a3=2 items=0 ppid=4022 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.093000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:16:23.093000 audit: BPF prog-id=202 op=UNLOAD Dec 16 02:16:23.093000 audit[4146]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4022 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.093000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:16:23.093000 audit: BPF prog-id=203 op=LOAD Dec 16 02:16:23.093000 audit[4146]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffea085fe8 a2=40 a3=ffffea086018 items=0 ppid=4022 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.093000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:16:23.093000 audit: BPF prog-id=203 op=UNLOAD Dec 16 02:16:23.093000 
audit[4146]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffea086018 items=0 ppid=4022 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.093000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:16:23.093000 audit: BPF prog-id=204 op=LOAD Dec 16 02:16:23.093000 audit[4146]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffea086138 a2=94 a3=b7 items=0 ppid=4022 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.093000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:16:23.093000 audit: BPF prog-id=204 op=UNLOAD Dec 16 02:16:23.093000 audit[4146]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4022 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.093000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:16:23.094000 audit: BPF prog-id=205 op=LOAD Dec 16 02:16:23.094000 audit[4146]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffea0857e8 a2=94 a3=2 items=0 ppid=4022 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.094000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:16:23.094000 audit: BPF prog-id=205 op=UNLOAD Dec 16 02:16:23.094000 audit[4146]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4022 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.094000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:16:23.094000 audit: BPF prog-id=206 op=LOAD Dec 16 02:16:23.094000 audit[4146]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffea085978 a2=94 a3=30 items=0 ppid=4022 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.094000 audit: 
PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 02:16:23.098000 audit: BPF prog-id=207 op=LOAD Dec 16 02:16:23.098000 audit[4150]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff6526128 a2=98 a3=fffff6526118 items=0 ppid=4022 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.098000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:16:23.098000 audit: BPF prog-id=207 op=UNLOAD Dec 16 02:16:23.098000 audit[4150]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff65260f8 a3=0 items=0 ppid=4022 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.098000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:16:23.098000 audit: BPF prog-id=208 op=LOAD Dec 16 02:16:23.098000 audit[4150]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff6525db8 a2=74 a3=95 items=0 ppid=4022 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.098000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:16:23.098000 audit: BPF prog-id=208 op=UNLOAD Dec 16 02:16:23.098000 audit[4150]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4022 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.098000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:16:23.098000 audit: BPF prog-id=209 op=LOAD Dec 16 02:16:23.098000 audit[4150]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff6525e18 a2=94 a3=2 items=0 ppid=4022 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.098000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:16:23.098000 audit: BPF prog-id=209 op=UNLOAD Dec 16 02:16:23.098000 audit[4150]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4022 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.098000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:16:23.194000 audit: BPF prog-id=210 op=LOAD Dec 16 02:16:23.194000 audit[4150]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff6525dd8 a2=40 a3=fffff6525e08 items=0 ppid=4022 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.194000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:16:23.194000 audit: BPF prog-id=210 op=UNLOAD Dec 16 02:16:23.194000 audit[4150]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffff6525e08 items=0 ppid=4022 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.194000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:16:23.204000 audit: BPF prog-id=211 op=LOAD Dec 16 02:16:23.204000 audit[4150]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff6525de8 a2=94 a3=4 items=0 ppid=4022 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.204000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:16:23.205000 audit: BPF prog-id=211 op=UNLOAD Dec 16 02:16:23.205000 audit[4150]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4022 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.205000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:16:23.205000 audit: BPF prog-id=212 op=LOAD Dec 16 02:16:23.205000 audit[4150]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff6525c28 a2=94 a3=5 items=0 ppid=4022 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.205000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:16:23.205000 audit: BPF prog-id=212 op=UNLOAD Dec 16 02:16:23.205000 audit[4150]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4022 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.205000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:16:23.205000 audit: BPF prog-id=213 op=LOAD Dec 16 02:16:23.205000 audit[4150]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff6525e58 a2=94 a3=6 items=0 ppid=4022 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.205000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:16:23.205000 audit: BPF prog-id=213 op=UNLOAD Dec 16 02:16:23.205000 audit[4150]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4022 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.205000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:16:23.205000 audit: BPF prog-id=214 op=LOAD Dec 16 02:16:23.205000 audit[4150]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff6525628 a2=94 a3=83 items=0 ppid=4022 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.205000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:16:23.205000 audit: BPF prog-id=215 op=LOAD Dec 16 02:16:23.205000 audit[4150]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffff65253e8 a2=94 a3=2 items=0 ppid=4022 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.205000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:16:23.205000 audit: BPF prog-id=215 op=UNLOAD Dec 16 02:16:23.205000 audit[4150]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4022 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.205000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 
16 02:16:23.206000 audit: BPF prog-id=214 op=UNLOAD Dec 16 02:16:23.206000 audit[4150]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=26405620 a3=263f8b00 items=0 ppid=4022 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.206000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 02:16:23.218000 audit: BPF prog-id=206 op=UNLOAD Dec 16 02:16:23.218000 audit[4022]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=40010ca040 a2=0 a3=0 items=0 ppid=3990 pid=4022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.218000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 02:16:23.259000 audit[4177]: NETFILTER_CFG table=mangle:125 family=2 entries=16 op=nft_register_chain pid=4177 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:16:23.259000 audit[4177]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffcbdc8f40 a2=0 a3=ffffa696bfa8 items=0 ppid=4022 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.259000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:16:23.261000 audit[4180]: NETFILTER_CFG table=nat:126 family=2 entries=15 op=nft_register_chain pid=4180 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:16:23.261000 audit[4180]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffccf13ae0 a2=0 a3=ffffbbf0efa8 items=0 ppid=4022 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.261000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:16:23.266000 audit[4179]: NETFILTER_CFG table=raw:127 family=2 entries=21 op=nft_register_chain pid=4179 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:16:23.266000 audit[4179]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=fffff8f068c0 a2=0 a3=ffffb83a1fa8 items=0 ppid=4022 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.266000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:16:23.269000 audit[4182]: NETFILTER_CFG table=filter:128 family=2 entries=94 op=nft_register_chain pid=4182 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:16:23.269000 audit[4182]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=53116 a0=3 a1=ffffc3d8ed70 a2=0 a3=ffffb5d02fa8 items=0 ppid=4022 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.269000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:16:23.523039 kubelet[2693]: I1216 02:16:23.522997 2693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da89ae5d-cbd0-4280-a9f3-2a8db9ab7544" path="/var/lib/kubelet/pods/da89ae5d-cbd0-4280-a9f3-2a8db9ab7544/volumes" Dec 16 02:16:23.725455 kubelet[2693]: E1216 02:16:23.725380 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cd49d4685-v8mf6" podUID="0f118220-9d7c-4c48-a0bc-35415c01901e" Dec 16 02:16:23.755000 audit[4194]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4194 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:23.755000 audit[4194]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffbcb20a0 a2=0 a3=1 items=0 ppid=2803 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.755000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:23.770000 audit[4194]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4194 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:23.770000 audit[4194]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffffbcb20a0 a2=0 a3=1 items=0 ppid=2803 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:23.770000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:24.211826 systemd-networkd[1468]: cali0a8505ad26e: Gained IPv6LL Dec 16 02:16:24.467715 systemd-networkd[1468]: vxlan.calico: Gained IPv6LL Dec 16 02:16:27.520939 kubelet[2693]: E1216 02:16:27.520746 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:27.521331 containerd[1550]: time="2025-12-16T02:16:27.521136810Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-86652,Uid:02d3c6ac-9315-4900-b355-1efdfc7c7665,Namespace:kube-system,Attempt:0,}" Dec 16 02:16:27.649738 systemd-networkd[1468]: calif4a198b0341: Link UP Dec 16 02:16:27.650167 systemd-networkd[1468]: calif4a198b0341: Gained carrier Dec 16 02:16:27.661364 containerd[1550]: 2025-12-16 02:16:27.582 [INFO][4198] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--86652-eth0 coredns-668d6bf9bc- kube-system 02d3c6ac-9315-4900-b355-1efdfc7c7665 895 0 2025-12-16 02:15:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-86652 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif4a198b0341 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267" Namespace="kube-system" Pod="coredns-668d6bf9bc-86652" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--86652-" Dec 16 02:16:27.661364 containerd[1550]: 2025-12-16 02:16:27.582 [INFO][4198] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267" Namespace="kube-system" Pod="coredns-668d6bf9bc-86652" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--86652-eth0" Dec 16 02:16:27.661364 containerd[1550]: 2025-12-16 02:16:27.610 [INFO][4212] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267" HandleID="k8s-pod-network.d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267" Workload="localhost-k8s-coredns--668d6bf9bc--86652-eth0" Dec 16 02:16:27.661564 containerd[1550]: 2025-12-16 02:16:27.610 [INFO][4212] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267" HandleID="k8s-pod-network.d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267" Workload="localhost-k8s-coredns--668d6bf9bc--86652-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-86652", "timestamp":"2025-12-16 02:16:27.610733985 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:16:27.661564 containerd[1550]: 2025-12-16 02:16:27.610 [INFO][4212] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:16:27.661564 containerd[1550]: 2025-12-16 02:16:27.610 [INFO][4212] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 02:16:27.661564 containerd[1550]: 2025-12-16 02:16:27.610 [INFO][4212] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 02:16:27.661564 containerd[1550]: 2025-12-16 02:16:27.620 [INFO][4212] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267" host="localhost" Dec 16 02:16:27.661564 containerd[1550]: 2025-12-16 02:16:27.625 [INFO][4212] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 02:16:27.661564 containerd[1550]: 2025-12-16 02:16:27.630 [INFO][4212] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 02:16:27.661564 containerd[1550]: 2025-12-16 02:16:27.632 [INFO][4212] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 02:16:27.661564 containerd[1550]: 2025-12-16 02:16:27.634 [INFO][4212] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 02:16:27.661564 containerd[1550]: 2025-12-16 02:16:27.634 [INFO][4212] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267" host="localhost" Dec 16 02:16:27.661921 containerd[1550]: 2025-12-16 02:16:27.635 [INFO][4212] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267 Dec 16 02:16:27.661921 containerd[1550]: 2025-12-16 02:16:27.640 [INFO][4212] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267" host="localhost" Dec 16 02:16:27.661921 containerd[1550]: 2025-12-16 02:16:27.645 [INFO][4212] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267" host="localhost" Dec 16 02:16:27.661921 containerd[1550]: 2025-12-16 02:16:27.645 [INFO][4212] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267" host="localhost" Dec 16 02:16:27.661921 containerd[1550]: 2025-12-16 02:16:27.645 [INFO][4212] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:16:27.661921 containerd[1550]: 2025-12-16 02:16:27.645 [INFO][4212] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267" HandleID="k8s-pod-network.d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267" Workload="localhost-k8s-coredns--668d6bf9bc--86652-eth0" Dec 16 02:16:27.662155 containerd[1550]: 2025-12-16 02:16:27.647 [INFO][4198] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267" Namespace="kube-system" Pod="coredns-668d6bf9bc-86652" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--86652-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--86652-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"02d3c6ac-9315-4900-b355-1efdfc7c7665", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 15, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-86652", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4a198b0341", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:16:27.662241 containerd[1550]: 2025-12-16 02:16:27.647 [INFO][4198] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267" Namespace="kube-system" Pod="coredns-668d6bf9bc-86652" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--86652-eth0" Dec 16 02:16:27.662241 containerd[1550]: 2025-12-16 02:16:27.647 [INFO][4198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4a198b0341 ContainerID="d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267" Namespace="kube-system" Pod="coredns-668d6bf9bc-86652" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--86652-eth0" Dec 16 02:16:27.662241 containerd[1550]: 2025-12-16 02:16:27.650 [INFO][4198] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267" Namespace="kube-system" Pod="coredns-668d6bf9bc-86652" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--86652-eth0" Dec 16 02:16:27.662399 
containerd[1550]: 2025-12-16 02:16:27.650 [INFO][4198] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267" Namespace="kube-system" Pod="coredns-668d6bf9bc-86652" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--86652-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--86652-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"02d3c6ac-9315-4900-b355-1efdfc7c7665", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 15, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267", Pod:"coredns-668d6bf9bc-86652", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4a198b0341", MAC:"32:e8:79:01:78:dc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:16:27.662399 containerd[1550]: 2025-12-16 02:16:27.657 [INFO][4198] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267" Namespace="kube-system" Pod="coredns-668d6bf9bc-86652" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--86652-eth0" Dec 16 02:16:27.674000 audit[4231]: NETFILTER_CFG table=filter:131 family=2 entries=42 op=nft_register_chain pid=4231 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:16:27.680659 kernel: kauditd_printk_skb: 242 callbacks suppressed Dec 16 02:16:27.680793 kernel: audit: type=1325 audit(1765851387.674:673): table=filter:131 family=2 entries=42 op=nft_register_chain pid=4231 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:16:27.680819 kernel: audit: type=1300 audit(1765851387.674:673): arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=ffffcf865be0 a2=0 a3=ffff8f521fa8 items=0 ppid=4022 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:27.674000 audit[4231]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=ffffcf865be0 a2=0 a3=ffff8f521fa8 items=0 ppid=4022 pid=4231 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:27.674000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:16:27.688530 kernel: audit: type=1327 audit(1765851387.674:673): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:16:27.690073 containerd[1550]: time="2025-12-16T02:16:27.689864458Z" level=info msg="connecting to shim d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267" address="unix:///run/containerd/s/b80f199a61642de302a2e76791d249b147521221f78947fc8e8a1f54a7986ab5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:16:27.724793 systemd[1]: Started cri-containerd-d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267.scope - libcontainer container d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267. Dec 16 02:16:27.735213 systemd[1]: Started sshd@8-10.0.0.94:22-10.0.0.1:41180.service - OpenSSH per-connection server daemon (10.0.0.1:41180). Dec 16 02:16:27.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.94:22-10.0.0.1:41180 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:16:27.739628 kernel: audit: type=1130 audit(1765851387.733:674): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.94:22-10.0.0.1:41180 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:16:27.738000 audit: BPF prog-id=216 op=LOAD Dec 16 02:16:27.741616 kernel: audit: type=1334 audit(1765851387.738:675): prog-id=216 op=LOAD Dec 16 02:16:27.741664 kernel: audit: type=1334 audit(1765851387.739:676): prog-id=217 op=LOAD Dec 16 02:16:27.739000 audit: BPF prog-id=217 op=LOAD Dec 16 02:16:27.739000 audit[4251]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4239 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:27.739000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434353161343036303161303936396262613436396361656234353665 Dec 16 02:16:27.739000 audit: BPF prog-id=217 op=UNLOAD Dec 16 02:16:27.742605 kernel: audit: type=1300 audit(1765851387.739:676): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4239 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:27.742637 kernel: audit: type=1327 audit(1765851387.739:676): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434353161343036303161303936396262613436396361656234353665 Dec 16 02:16:27.742659 kernel: audit: type=1334 audit(1765851387.739:677): prog-id=217 op=UNLOAD Dec 16 02:16:27.742674 kernel: audit: type=1300 audit(1765851387.739:677): arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4239 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:27.739000 audit[4251]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4239 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:27.739000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434353161343036303161303936396262613436396361656234353665 Dec 16 02:16:27.740000 audit: BPF prog-id=218 op=LOAD Dec 16 02:16:27.740000 audit[4251]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4239 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:27.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434353161343036303161303936396262613436396361656234353665 Dec 16 02:16:27.744000 audit: BPF prog-id=219 op=LOAD Dec 16 02:16:27.744000 audit[4251]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4239 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:27.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434353161343036303161303936396262613436396361656234353665 Dec 16 02:16:27.747000 audit: BPF prog-id=219 op=UNLOAD Dec 16 02:16:27.747000 audit[4251]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4239 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:27.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434353161343036303161303936396262613436396361656234353665 Dec 16 02:16:27.747000 audit: BPF prog-id=218 op=UNLOAD Dec 16 02:16:27.747000 audit[4251]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4239 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:27.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434353161343036303161303936396262613436396361656234353665 Dec 16 02:16:27.749000 audit: BPF prog-id=220 op=LOAD Dec 16 02:16:27.749000 audit[4251]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4239 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:27.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6434353161343036303161303936396262613436396361656234353665 Dec 16 02:16:27.751050 systemd-resolved[1253]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 02:16:27.776529 containerd[1550]: time="2025-12-16T02:16:27.776426862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-86652,Uid:02d3c6ac-9315-4900-b355-1efdfc7c7665,Namespace:kube-system,Attempt:0,} returns sandbox id \"d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267\"" Dec 16 02:16:27.778668 kubelet[2693]: E1216 02:16:27.778627 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:27.782562 containerd[1550]: time="2025-12-16T02:16:27.782531769Z" level=info msg="CreateContainer within sandbox \"d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 02:16:27.798528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1585119480.mount: Deactivated successfully. Dec 16 02:16:27.800347 containerd[1550]: time="2025-12-16T02:16:27.800249542Z" level=info msg="Container cb7b7777af063982bbb10b6794a5ca8c2a14a20cd1f4765eb85b600562bd1e93: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:16:27.805442 containerd[1550]: time="2025-12-16T02:16:27.805377421Z" level=info msg="CreateContainer within sandbox \"d451a40601a0969bba469caeb456e81b1c996f802b7a14c625b443a77f346267\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cb7b7777af063982bbb10b6794a5ca8c2a14a20cd1f4765eb85b600562bd1e93\"" Dec 16 02:16:27.807088 containerd[1550]: time="2025-12-16T02:16:27.807060605Z" level=info msg="StartContainer for \"cb7b7777af063982bbb10b6794a5ca8c2a14a20cd1f4765eb85b600562bd1e93\"" Dec 16 02:16:27.808454 containerd[1550]: time="2025-12-16T02:16:27.808373468Z" level=info msg="connecting to shim cb7b7777af063982bbb10b6794a5ca8c2a14a20cd1f4765eb85b600562bd1e93" address="unix:///run/containerd/s/b80f199a61642de302a2e76791d249b147521221f78947fc8e8a1f54a7986ab5" protocol=ttrpc version=3 Dec 16 02:16:27.830000 audit[4271]: USER_ACCT pid=4271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:27.832102 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 41180 ssh2: RSA SHA256:rH1sOPbnKIrlCFQFTuacxecfE1BEKAl/Bfev/eSdaO4 Dec 16 02:16:27.831000 audit[4271]: CRED_ACQ pid=4271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:27.832000 audit[4271]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffecbbcfc0 a2=3 a3=0 items=0 ppid=1 pid=4271 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:27.832000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:16:27.834249 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:16:27.836872 systemd[1]: Started cri-containerd-cb7b7777af063982bbb10b6794a5ca8c2a14a20cd1f4765eb85b600562bd1e93.scope - libcontainer container cb7b7777af063982bbb10b6794a5ca8c2a14a20cd1f4765eb85b600562bd1e93. Dec 16 02:16:27.842649 systemd-logind[1526]: New session 10 of user core. Dec 16 02:16:27.848812 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 16 02:16:27.852000 audit: BPF prog-id=221 op=LOAD Dec 16 02:16:27.852000 audit[4271]: USER_START pid=4271 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:27.852000 audit: BPF prog-id=222 op=LOAD Dec 16 02:16:27.852000 audit[4281]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4239 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:27.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362376237373737616630363339383262626231306236373934613563 Dec 16 02:16:27.852000 audit: BPF prog-id=222 op=UNLOAD Dec 16 02:16:27.852000 audit[4281]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4239 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:27.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362376237373737616630363339383262626231306236373934613563 Dec 16 02:16:27.853000 audit: BPF prog-id=223 op=LOAD Dec 16 02:16:27.853000 audit[4281]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4239 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:27.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362376237373737616630363339383262626231306236373934613563 Dec 16 02:16:27.853000 audit: BPF prog-id=224 op=LOAD Dec 16 02:16:27.853000 audit[4281]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4239 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:27.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362376237373737616630363339383262626231306236373934613563 Dec 16 02:16:27.853000 audit: BPF prog-id=224 op=UNLOAD Dec 16 02:16:27.853000 audit[4281]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4239 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:27.853000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362376237373737616630363339383262626231306236373934613563 Dec 16 02:16:27.853000 audit: BPF prog-id=223 op=UNLOAD Dec 16 02:16:27.853000 audit[4281]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4239 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:27.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362376237373737616630363339383262626231306236373934613563 Dec 16 02:16:27.853000 audit: BPF prog-id=225 op=LOAD Dec 16 02:16:27.853000 audit[4281]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4239 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:27.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362376237373737616630363339383262626231306236373934613563 Dec 16 02:16:27.854000 audit[4301]: CRED_ACQ pid=4301 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:27.882419 containerd[1550]: time="2025-12-16T02:16:27.882381182Z" level=info msg="StartContainer for \"cb7b7777af063982bbb10b6794a5ca8c2a14a20cd1f4765eb85b600562bd1e93\" returns successfully" Dec 16 02:16:28.009173 sshd[4301]: Connection closed by 10.0.0.1 port 41180 Dec 16 02:16:28.009463 sshd-session[4271]: pam_unix(sshd:session): session closed for user core Dec 16 02:16:28.009000 audit[4271]: USER_END pid=4271 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:28.009000 audit[4271]: CRED_DISP pid=4271 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:28.013578 systemd[1]: sshd@8-10.0.0.94:22-10.0.0.1:41180.service: Deactivated successfully. Dec 16 02:16:28.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.94:22-10.0.0.1:41180 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:16:28.015480 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 02:16:28.016354 systemd-logind[1526]: Session 10 logged out. Waiting for processes to exit. Dec 16 02:16:28.017302 systemd-logind[1526]: Removed session 10. 
Dec 16 02:16:28.521234 containerd[1550]: time="2025-12-16T02:16:28.521196807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79dfb47d67-g8xrk,Uid:1740e8a2-cfc1-49fa-aafb-3ebfadc1402f,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:16:28.637742 systemd-networkd[1468]: califf2a8cfa411: Link UP Dec 16 02:16:28.638938 systemd-networkd[1468]: califf2a8cfa411: Gained carrier Dec 16 02:16:28.664577 containerd[1550]: 2025-12-16 02:16:28.565 [INFO][4335] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79dfb47d67--g8xrk-eth0 calico-apiserver-79dfb47d67- calico-apiserver 1740e8a2-cfc1-49fa-aafb-3ebfadc1402f 900 0 2025-12-16 02:16:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79dfb47d67 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79dfb47d67-g8xrk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califf2a8cfa411 [] [] }} ContainerID="da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1" Namespace="calico-apiserver" Pod="calico-apiserver-79dfb47d67-g8xrk" WorkloadEndpoint="localhost-k8s-calico--apiserver--79dfb47d67--g8xrk-" Dec 16 02:16:28.664577 containerd[1550]: 2025-12-16 02:16:28.566 [INFO][4335] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1" Namespace="calico-apiserver" Pod="calico-apiserver-79dfb47d67-g8xrk" WorkloadEndpoint="localhost-k8s-calico--apiserver--79dfb47d67--g8xrk-eth0" Dec 16 02:16:28.664577 containerd[1550]: 2025-12-16 02:16:28.592 [INFO][4348] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1" HandleID="k8s-pod-network.da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1" Workload="localhost-k8s-calico--apiserver--79dfb47d67--g8xrk-eth0" Dec 16 02:16:28.664577 containerd[1550]: 2025-12-16 02:16:28.592 [INFO][4348] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1" HandleID="k8s-pod-network.da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1" Workload="localhost-k8s-calico--apiserver--79dfb47d67--g8xrk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003220a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79dfb47d67-g8xrk", "timestamp":"2025-12-16 02:16:28.592260432 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:16:28.664577 containerd[1550]: 2025-12-16 02:16:28.592 [INFO][4348] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:16:28.664577 containerd[1550]: 2025-12-16 02:16:28.592 [INFO][4348] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 02:16:28.664577 containerd[1550]: 2025-12-16 02:16:28.592 [INFO][4348] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 02:16:28.664577 containerd[1550]: 2025-12-16 02:16:28.601 [INFO][4348] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1" host="localhost" Dec 16 02:16:28.664577 containerd[1550]: 2025-12-16 02:16:28.607 [INFO][4348] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 02:16:28.664577 containerd[1550]: 2025-12-16 02:16:28.612 [INFO][4348] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 02:16:28.664577 containerd[1550]: 2025-12-16 02:16:28.615 [INFO][4348] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 02:16:28.664577 containerd[1550]: 2025-12-16 02:16:28.617 [INFO][4348] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 02:16:28.664577 containerd[1550]: 2025-12-16 02:16:28.617 [INFO][4348] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1" host="localhost" Dec 16 02:16:28.664577 containerd[1550]: 2025-12-16 02:16:28.619 [INFO][4348] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1 Dec 16 02:16:28.664577 containerd[1550]: 2025-12-16 02:16:28.623 [INFO][4348] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1" host="localhost" Dec 16 02:16:28.664577 containerd[1550]: 2025-12-16 02:16:28.630 [INFO][4348] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1" host="localhost" Dec 16 02:16:28.664577 containerd[1550]: 2025-12-16 02:16:28.630 [INFO][4348] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1" host="localhost" Dec 16 02:16:28.664577 containerd[1550]: 2025-12-16 02:16:28.630 [INFO][4348] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
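
Note: in the IPAM sequence above, Calico confirms the host's affinity for block 192.168.88.128/26 and claims 192.168.88.131/26 for the pod, which the endpoint below records as the single address 192.168.88.131/32. A minimal sketch with Python's standard ipaddress module, using only the CIDR and address from the log, double-checks that relationship and the block size:

    import ipaddress

    # Values taken from the Calico IPAM records above.
    block = ipaddress.ip_network("192.168.88.128/26")   # host's affine block
    assigned = ipaddress.ip_address("192.168.88.131")   # address claimed for the pod

    print(assigned in block)         # True: the claimed address is inside the block
    print(block.num_addresses)       # 64 addresses per /26 IPAM block
    print(ipaddress.ip_network("192.168.88.131/32").subnet_of(block))  # True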
Dec 16 02:16:28.664577 containerd[1550]: 2025-12-16 02:16:28.630 [INFO][4348] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1" HandleID="k8s-pod-network.da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1" Workload="localhost-k8s-calico--apiserver--79dfb47d67--g8xrk-eth0" Dec 16 02:16:28.666040 containerd[1550]: 2025-12-16 02:16:28.632 [INFO][4335] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1" Namespace="calico-apiserver" Pod="calico-apiserver-79dfb47d67-g8xrk" WorkloadEndpoint="localhost-k8s-calico--apiserver--79dfb47d67--g8xrk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79dfb47d67--g8xrk-eth0", GenerateName:"calico-apiserver-79dfb47d67-", Namespace:"calico-apiserver", SelfLink:"", UID:"1740e8a2-cfc1-49fa-aafb-3ebfadc1402f", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 16, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79dfb47d67", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79dfb47d67-g8xrk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf2a8cfa411", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:16:28.666040 containerd[1550]: 2025-12-16 02:16:28.632 [INFO][4335] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1" Namespace="calico-apiserver" Pod="calico-apiserver-79dfb47d67-g8xrk" WorkloadEndpoint="localhost-k8s-calico--apiserver--79dfb47d67--g8xrk-eth0" Dec 16 02:16:28.666040 containerd[1550]: 2025-12-16 02:16:28.632 [INFO][4335] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf2a8cfa411 ContainerID="da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1" Namespace="calico-apiserver" Pod="calico-apiserver-79dfb47d67-g8xrk" WorkloadEndpoint="localhost-k8s-calico--apiserver--79dfb47d67--g8xrk-eth0" Dec 16 02:16:28.666040 containerd[1550]: 2025-12-16 02:16:28.638 [INFO][4335] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1" Namespace="calico-apiserver" Pod="calico-apiserver-79dfb47d67-g8xrk" WorkloadEndpoint="localhost-k8s-calico--apiserver--79dfb47d67--g8xrk-eth0" Dec 16 02:16:28.666040 containerd[1550]: 2025-12-16 02:16:28.639 [INFO][4335] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1" Namespace="calico-apiserver" Pod="calico-apiserver-79dfb47d67-g8xrk" WorkloadEndpoint="localhost-k8s-calico--apiserver--79dfb47d67--g8xrk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79dfb47d67--g8xrk-eth0", GenerateName:"calico-apiserver-79dfb47d67-", Namespace:"calico-apiserver", SelfLink:"", UID:"1740e8a2-cfc1-49fa-aafb-3ebfadc1402f", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 16, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79dfb47d67", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1", Pod:"calico-apiserver-79dfb47d67-g8xrk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf2a8cfa411", MAC:"9a:26:4c:8a:8b:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:16:28.666040 containerd[1550]: 2025-12-16 02:16:28.661 [INFO][4335] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1" Namespace="calico-apiserver" Pod="calico-apiserver-79dfb47d67-g8xrk" WorkloadEndpoint="localhost-k8s-calico--apiserver--79dfb47d67--g8xrk-eth0" Dec 16 02:16:28.676000 audit[4364]: NETFILTER_CFG table=filter:132 family=2 entries=60 op=nft_register_chain pid=4364 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:16:28.676000 audit[4364]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=32248 a0=3 a1=fffff81dca60 a2=0 a3=ffffad4a6fa8 items=0 ppid=4022 pid=4364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:28.676000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:16:28.690212 containerd[1550]: time="2025-12-16T02:16:28.690170922Z" level=info msg="connecting to shim da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1" address="unix:///run/containerd/s/20913a10bd9bdebb7b9a24d2d534b97c9cd72e67b0b875fedcd2aa9f045df5c0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:16:28.712792 systemd[1]: Started cri-containerd-da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1.scope - libcontainer container da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1. 
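
Note: every record in this log shares the same prefix of timestamp, emitting unit with an optional PID in brackets, and message. A minimal Python sketch for splitting such records into fields so they can be filtered by unit or PID; the regex is inferred from the lines in this log, not an official journald format:

    import re

    # Record shape inferred from this log, e.g.
    #   "Dec 16 02:16:28.013578 systemd[1]: sshd@8-...service: Deactivated successfully."
    LINE_RE = re.compile(
        r"^(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+) "
        r"(?P<unit>[\w.@-]+)(?:\[(?P<pid>\d+)\])?: "
        r"(?P<msg>.*)$"
    )

    def parse_record(line: str) -> dict | None:
        """Return ts/unit/pid/msg fields, or None if the line has another shape."""
        m = LINE_RE.match(line)
        return m.groupdict() if m else None

    sample = ("Dec 16 02:16:28.013578 systemd[1]: "
              "sshd@8-10.0.0.94:22-10.0.0.1:41180.service: Deactivated successfully.")
    print(parse_record(sample))
    # {'ts': 'Dec 16 02:16:28.013578', 'unit': 'systemd', 'pid': '1', 'msg': 'sshd@8-...'}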
Dec 16 02:16:28.721000 audit: BPF prog-id=226 op=LOAD Dec 16 02:16:28.721000 audit: BPF prog-id=227 op=LOAD Dec 16 02:16:28.721000 audit[4384]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4373 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:28.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461373136373761376363653937323836306139353931643038653232 Dec 16 02:16:28.721000 audit: BPF prog-id=227 op=UNLOAD Dec 16 02:16:28.721000 audit[4384]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4373 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:28.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461373136373761376363653937323836306139353931643038653232 Dec 16 02:16:28.721000 audit: BPF prog-id=228 op=LOAD Dec 16 02:16:28.721000 audit[4384]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4373 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:28.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461373136373761376363653937323836306139353931643038653232 Dec 16 02:16:28.721000 audit: BPF prog-id=229 op=LOAD Dec 16 02:16:28.721000 audit[4384]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4373 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:28.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461373136373761376363653937323836306139353931643038653232 Dec 16 02:16:28.721000 audit: BPF prog-id=229 op=UNLOAD Dec 16 02:16:28.721000 audit[4384]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4373 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:28.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461373136373761376363653937323836306139353931643038653232 Dec 16 02:16:28.721000 audit: BPF prog-id=228 op=UNLOAD Dec 16 02:16:28.721000 audit[4384]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4373 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:28.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461373136373761376363653937323836306139353931643038653232 Dec 16 02:16:28.721000 audit: BPF prog-id=230 op=LOAD Dec 16 02:16:28.721000 audit[4384]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4373 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:28.721000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461373136373761376363653937323836306139353931643038653232 Dec 16 02:16:28.724207 systemd-resolved[1253]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 02:16:28.742031 kubelet[2693]: E1216 02:16:28.742000 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:28.750641 containerd[1550]: time="2025-12-16T02:16:28.750558488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79dfb47d67-g8xrk,Uid:1740e8a2-cfc1-49fa-aafb-3ebfadc1402f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"da71677a7cce972860a9591d08e22d16d555e67119f0ef8d86bbc9b06e93cbe1\"" Dec 16 02:16:28.755242 containerd[1550]: time="2025-12-16T02:16:28.755212184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:16:28.766000 audit[4410]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=4410 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:28.768944 kubelet[2693]: I1216 02:16:28.768887 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-86652" podStartSLOduration=38.768860921 podStartE2EDuration="38.768860921s" podCreationTimestamp="2025-12-16 02:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:16:28.757670167 +0000 UTC m=+45.318820950" watchObservedRunningTime="2025-12-16 02:16:28.768860921 +0000 UTC m=+45.330011664" Dec 16 02:16:28.766000 audit[4410]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe9f7fa60 a2=0 a3=1 items=0 ppid=2803 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:28.766000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:28.770000 audit[4410]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=4410 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 16 02:16:28.770000 audit[4410]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe9f7fa60 a2=0 a3=1 items=0 ppid=2803 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:28.770000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:28.791000 audit[4412]: NETFILTER_CFG table=filter:135 family=2 entries=17 op=nft_register_rule pid=4412 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:28.791000 audit[4412]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdf1014c0 a2=0 a3=1 items=0 ppid=2803 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:28.791000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:28.800000 audit[4412]: NETFILTER_CFG table=nat:136 family=2 entries=35 op=nft_register_chain pid=4412 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:28.800000 audit[4412]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffdf1014c0 a2=0 a3=1 items=0 ppid=2803 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:28.800000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:28.962561 containerd[1550]: time="2025-12-16T02:16:28.962473066Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:16:28.963495 containerd[1550]: time="2025-12-16T02:16:28.963455931Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:16:28.963603 containerd[1550]: time="2025-12-16T02:16:28.963536579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:16:28.963800 kubelet[2693]: E1216 02:16:28.963745 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:16:28.963800 kubelet[2693]: E1216 02:16:28.963794 2693 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:16:28.963976 kubelet[2693]: E1216 02:16:28.963932 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jw7zj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79dfb47d67-g8xrk_calico-apiserver(1740e8a2-cfc1-49fa-aafb-3ebfadc1402f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:16:28.965504 kubelet[2693]: E1216 02:16:28.965442 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79dfb47d67-g8xrk" podUID="1740e8a2-cfc1-49fa-aafb-3ebfadc1402f" Dec 16 02:16:29.331783 systemd-networkd[1468]: calif4a198b0341: Gained IPv6LL Dec 16 02:16:29.521640 containerd[1550]: time="2025-12-16T02:16:29.521579308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-754cc876f4-x89dv,Uid:7aeba5eb-89d7-4d56-af0a-38d8908b6a09,Namespace:calico-system,Attempt:0,}" Dec 16 02:16:29.650926 systemd-networkd[1468]: cali1a6eee03d0a: Link UP Dec 16 02:16:29.651084 systemd-networkd[1468]: cali1a6eee03d0a: Gained carrier Dec 16 02:16:29.663336 containerd[1550]: 2025-12-16 02:16:29.566 [INFO][4413] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--754cc876f4--x89dv-eth0 calico-kube-controllers-754cc876f4- calico-system 7aeba5eb-89d7-4d56-af0a-38d8908b6a09 897 0 
2025-12-16 02:16:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:754cc876f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-754cc876f4-x89dv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1a6eee03d0a [] [] }} ContainerID="f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da" Namespace="calico-system" Pod="calico-kube-controllers-754cc876f4-x89dv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--754cc876f4--x89dv-" Dec 16 02:16:29.663336 containerd[1550]: 2025-12-16 02:16:29.567 [INFO][4413] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da" Namespace="calico-system" Pod="calico-kube-controllers-754cc876f4-x89dv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--754cc876f4--x89dv-eth0" Dec 16 02:16:29.663336 containerd[1550]: 2025-12-16 02:16:29.611 [INFO][4429] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da" HandleID="k8s-pod-network.f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da" Workload="localhost-k8s-calico--kube--controllers--754cc876f4--x89dv-eth0" Dec 16 02:16:29.663336 containerd[1550]: 2025-12-16 02:16:29.611 [INFO][4429] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da" HandleID="k8s-pod-network.f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da" Workload="localhost-k8s-calico--kube--controllers--754cc876f4--x89dv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c330), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-754cc876f4-x89dv", "timestamp":"2025-12-16 02:16:29.611331129 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:16:29.663336 containerd[1550]: 2025-12-16 02:16:29.611 [INFO][4429] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:16:29.663336 containerd[1550]: 2025-12-16 02:16:29.611 [INFO][4429] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 02:16:29.663336 containerd[1550]: 2025-12-16 02:16:29.611 [INFO][4429] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 02:16:29.663336 containerd[1550]: 2025-12-16 02:16:29.621 [INFO][4429] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da" host="localhost" Dec 16 02:16:29.663336 containerd[1550]: 2025-12-16 02:16:29.626 [INFO][4429] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 02:16:29.663336 containerd[1550]: 2025-12-16 02:16:29.630 [INFO][4429] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 02:16:29.663336 containerd[1550]: 2025-12-16 02:16:29.632 [INFO][4429] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 02:16:29.663336 containerd[1550]: 2025-12-16 02:16:29.634 [INFO][4429] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 02:16:29.663336 containerd[1550]: 2025-12-16 02:16:29.634 [INFO][4429] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da" host="localhost" Dec 16 02:16:29.663336 containerd[1550]: 2025-12-16 02:16:29.636 [INFO][4429] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da Dec 16 02:16:29.663336 containerd[1550]: 2025-12-16 02:16:29.640 [INFO][4429] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da" host="localhost" Dec 16 02:16:29.663336 containerd[1550]: 2025-12-16 02:16:29.645 [INFO][4429] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da" host="localhost" Dec 16 02:16:29.663336 containerd[1550]: 2025-12-16 02:16:29.646 [INFO][4429] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da" host="localhost" Dec 16 02:16:29.663336 containerd[1550]: 2025-12-16 02:16:29.646 [INFO][4429] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
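
Note: the PullImage failure earlier in this section ("fetch failed after status: 404 Not Found", followed by kubelet's ErrImagePull) means the registry has no manifest for ghcr.io/flatcar/calico/apiserver:v3.30.4. A minimal sketch that reproduces the same check outside the kubelet, assuming ghcr.io follows the usual OCI distribution flow of an anonymous pull token from /token followed by a manifest request; the endpoints and media types here are standard registry API conventions, not something this log confirms:

    import json
    import urllib.error
    import urllib.request

    def tag_exists(repo: str, tag: str, registry: str = "ghcr.io") -> bool:
        """HEAD the manifest for repo:tag; False on the same 404 containerd hit above."""
        # Anonymous pull token (standard token-auth flow for public images).
        token_url = f"https://{registry}/token?scope=repository:{repo}:pull"
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]

        req = urllib.request.Request(
            f"https://{registry}/v2/{repo}/manifests/{tag}", method="HEAD")
        req.add_header("Authorization", f"Bearer {token}")
        req.add_header("Accept",
                       "application/vnd.oci.image.index.v1+json, "
                       "application/vnd.docker.distribution.manifest.list.v2+json")
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.status == 200
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    if __name__ == "__main__":
        print(tag_exists("flatcar/calico/apiserver", "v3.30.4"))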
Dec 16 02:16:29.663336 containerd[1550]: 2025-12-16 02:16:29.646 [INFO][4429] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da" HandleID="k8s-pod-network.f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da" Workload="localhost-k8s-calico--kube--controllers--754cc876f4--x89dv-eth0" Dec 16 02:16:29.664528 containerd[1550]: 2025-12-16 02:16:29.649 [INFO][4413] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da" Namespace="calico-system" Pod="calico-kube-controllers-754cc876f4-x89dv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--754cc876f4--x89dv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--754cc876f4--x89dv-eth0", GenerateName:"calico-kube-controllers-754cc876f4-", Namespace:"calico-system", SelfLink:"", UID:"7aeba5eb-89d7-4d56-af0a-38d8908b6a09", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 16, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"754cc876f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-754cc876f4-x89dv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1a6eee03d0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:16:29.664528 containerd[1550]: 2025-12-16 02:16:29.649 [INFO][4413] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da" Namespace="calico-system" Pod="calico-kube-controllers-754cc876f4-x89dv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--754cc876f4--x89dv-eth0" Dec 16 02:16:29.664528 containerd[1550]: 2025-12-16 02:16:29.649 [INFO][4413] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a6eee03d0a ContainerID="f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da" Namespace="calico-system" Pod="calico-kube-controllers-754cc876f4-x89dv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--754cc876f4--x89dv-eth0" Dec 16 02:16:29.664528 containerd[1550]: 2025-12-16 02:16:29.650 [INFO][4413] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da" Namespace="calico-system" Pod="calico-kube-controllers-754cc876f4-x89dv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--754cc876f4--x89dv-eth0" Dec 16 02:16:29.664528 containerd[1550]: 2025-12-16 02:16:29.651 [INFO][4413] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da" Namespace="calico-system" Pod="calico-kube-controllers-754cc876f4-x89dv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--754cc876f4--x89dv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--754cc876f4--x89dv-eth0", GenerateName:"calico-kube-controllers-754cc876f4-", Namespace:"calico-system", SelfLink:"", UID:"7aeba5eb-89d7-4d56-af0a-38d8908b6a09", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 16, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"754cc876f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da", Pod:"calico-kube-controllers-754cc876f4-x89dv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1a6eee03d0a", MAC:"b6:42:e3:f0:04:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:16:29.664528 containerd[1550]: 2025-12-16 02:16:29.661 [INFO][4413] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da" Namespace="calico-system" Pod="calico-kube-controllers-754cc876f4-x89dv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--754cc876f4--x89dv-eth0" Dec 16 02:16:29.673000 audit[4445]: NETFILTER_CFG table=filter:137 family=2 entries=40 op=nft_register_chain pid=4445 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:16:29.673000 audit[4445]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20748 a0=3 a1=ffffee52fc50 a2=0 a3=ffffb3fc1fa8 items=0 ppid=4022 pid=4445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:29.673000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:16:29.684138 containerd[1550]: time="2025-12-16T02:16:29.684098254Z" level=info msg="connecting to shim f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da" address="unix:///run/containerd/s/96df1c91e287a8e2423857dc9223e1a179dddb5c339f3d0e7ff8d1cefaba74d3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:16:29.720843 systemd[1]: Started cri-containerd-f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da.scope - libcontainer container f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da. 
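
Note: the recurring "Nameserver limits exceeded" kubelet errors in this log mean the node's resolv.conf lists more nameservers than the classic limit of three, so only "1.1.1.1 1.0.0.1 8.8.8.8" is applied. A minimal sketch that performs the same count; the /etc/resolv.conf path and the limit of 3 are standard Linux conventions rather than values taken from this log:

    from pathlib import Path

    RESOLV_CONF_LIMIT = 3  # classic glibc MAXNS limit the kubelet warning refers to

    def nameservers(path: str = "/etc/resolv.conf") -> list[str]:
        """Collect nameserver entries from a resolv.conf-style file."""
        servers = []
        for line in Path(path).read_text().splitlines():
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
        return servers

    if __name__ == "__main__":
        ns = nameservers()
        if len(ns) > RESOLV_CONF_LIMIT:
            print(f"{len(ns)} nameservers configured; only {ns[:RESOLV_CONF_LIMIT]} "
                  "would be applied, matching the kubelet warning in this log")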
Dec 16 02:16:29.730000 audit: BPF prog-id=231 op=LOAD Dec 16 02:16:29.731000 audit: BPF prog-id=232 op=LOAD Dec 16 02:16:29.731000 audit[4467]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4454 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:29.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634303936336161393534356632376265636464386163663765343134 Dec 16 02:16:29.731000 audit: BPF prog-id=232 op=UNLOAD Dec 16 02:16:29.731000 audit[4467]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4454 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:29.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634303936336161393534356632376265636464386163663765343134 Dec 16 02:16:29.732000 audit: BPF prog-id=233 op=LOAD Dec 16 02:16:29.732000 audit[4467]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4454 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:29.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634303936336161393534356632376265636464386163663765343134 Dec 16 02:16:29.732000 audit: BPF prog-id=234 op=LOAD Dec 16 02:16:29.732000 audit[4467]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4454 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:29.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634303936336161393534356632376265636464386163663765343134 Dec 16 02:16:29.732000 audit: BPF prog-id=234 op=UNLOAD Dec 16 02:16:29.732000 audit[4467]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4454 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:29.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634303936336161393534356632376265636464386163663765343134 Dec 16 02:16:29.732000 audit: BPF prog-id=233 op=UNLOAD Dec 16 02:16:29.732000 audit[4467]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4454 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:29.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634303936336161393534356632376265636464386163663765343134 Dec 16 02:16:29.732000 audit: BPF prog-id=235 op=LOAD Dec 16 02:16:29.732000 audit[4467]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4454 pid=4467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:29.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634303936336161393534356632376265636464386163663765343134 Dec 16 02:16:29.735999 systemd-resolved[1253]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 02:16:29.749755 kubelet[2693]: E1216 02:16:29.749708 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:29.752927 kubelet[2693]: E1216 02:16:29.752444 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79dfb47d67-g8xrk" podUID="1740e8a2-cfc1-49fa-aafb-3ebfadc1402f" Dec 16 02:16:29.806934 containerd[1550]: time="2025-12-16T02:16:29.806890007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-754cc876f4-x89dv,Uid:7aeba5eb-89d7-4d56-af0a-38d8908b6a09,Namespace:calico-system,Attempt:0,} returns sandbox id \"f40963aa9545f27becdd8acf7e4146f13e7a9e60334426f40f11dc0f85a7c5da\"" Dec 16 02:16:29.808575 containerd[1550]: time="2025-12-16T02:16:29.808550061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 02:16:29.842000 audit[4494]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=4494 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:29.842000 audit[4494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc836e2c0 a2=0 a3=1 items=0 ppid=2803 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:29.842000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:29.848000 audit[4494]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=4494 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:29.848000 audit[4494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc836e2c0 a2=0 a3=1 items=0 ppid=2803 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:29.848000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:30.048544 containerd[1550]: time="2025-12-16T02:16:30.048497961Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:16:30.059827 containerd[1550]: time="2025-12-16T02:16:30.059758875Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 02:16:30.059994 containerd[1550]: time="2025-12-16T02:16:30.059832002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 02:16:30.060150 kubelet[2693]: E1216 02:16:30.060093 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:16:30.060200 kubelet[2693]: E1216 02:16:30.060146 2693 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:16:30.060740 kubelet[2693]: E1216 02:16:30.060678 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pscw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-754cc876f4-x89dv_calico-system(7aeba5eb-89d7-4d56-af0a-38d8908b6a09): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 02:16:30.061895 kubelet[2693]: E1216 02:16:30.061856 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754cc876f4-x89dv" podUID="7aeba5eb-89d7-4d56-af0a-38d8908b6a09" Dec 16 02:16:30.483757 systemd-networkd[1468]: califf2a8cfa411: Gained IPv6LL Dec 16 02:16:30.521357 containerd[1550]: time="2025-12-16T02:16:30.521309753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lddqc,Uid:c7d46623-447d-4a3b-a433-802c6ce8e063,Namespace:calico-system,Attempt:0,}" Dec 16 02:16:30.521661 containerd[1550]: time="2025-12-16T02:16:30.521333515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xd2kv,Uid:fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff,Namespace:calico-system,Attempt:0,}" Dec 16 02:16:30.666767 systemd-networkd[1468]: calic66591853c1: Link UP Dec 16 02:16:30.666970 systemd-networkd[1468]: calic66591853c1: Gained carrier Dec 16 02:16:30.683847 containerd[1550]: 2025-12-16 02:16:30.587 [INFO][4500] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xd2kv-eth0 csi-node-driver- calico-system fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff 768 0 2025-12-16 02:16:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-xd2kv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic66591853c1 [] [] }} ContainerID="730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326" Namespace="calico-system" 
Pod="csi-node-driver-xd2kv" WorkloadEndpoint="localhost-k8s-csi--node--driver--xd2kv-" Dec 16 02:16:30.683847 containerd[1550]: 2025-12-16 02:16:30.587 [INFO][4500] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326" Namespace="calico-system" Pod="csi-node-driver-xd2kv" WorkloadEndpoint="localhost-k8s-csi--node--driver--xd2kv-eth0" Dec 16 02:16:30.683847 containerd[1550]: 2025-12-16 02:16:30.613 [INFO][4528] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326" HandleID="k8s-pod-network.730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326" Workload="localhost-k8s-csi--node--driver--xd2kv-eth0" Dec 16 02:16:30.683847 containerd[1550]: 2025-12-16 02:16:30.613 [INFO][4528] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326" HandleID="k8s-pod-network.730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326" Workload="localhost-k8s-csi--node--driver--xd2kv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001373f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xd2kv", "timestamp":"2025-12-16 02:16:30.613800547 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:16:30.683847 containerd[1550]: 2025-12-16 02:16:30.613 [INFO][4528] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:16:30.683847 containerd[1550]: 2025-12-16 02:16:30.614 [INFO][4528] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 02:16:30.683847 containerd[1550]: 2025-12-16 02:16:30.614 [INFO][4528] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 02:16:30.683847 containerd[1550]: 2025-12-16 02:16:30.624 [INFO][4528] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326" host="localhost" Dec 16 02:16:30.683847 containerd[1550]: 2025-12-16 02:16:30.629 [INFO][4528] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 02:16:30.683847 containerd[1550]: 2025-12-16 02:16:30.636 [INFO][4528] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 02:16:30.683847 containerd[1550]: 2025-12-16 02:16:30.638 [INFO][4528] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 02:16:30.683847 containerd[1550]: 2025-12-16 02:16:30.640 [INFO][4528] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 02:16:30.683847 containerd[1550]: 2025-12-16 02:16:30.640 [INFO][4528] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326" host="localhost" Dec 16 02:16:30.683847 containerd[1550]: 2025-12-16 02:16:30.642 [INFO][4528] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326 Dec 16 02:16:30.683847 containerd[1550]: 2025-12-16 02:16:30.646 [INFO][4528] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326" host="localhost" Dec 16 02:16:30.683847 containerd[1550]: 2025-12-16 02:16:30.651 [INFO][4528] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326" host="localhost" Dec 16 02:16:30.683847 containerd[1550]: 2025-12-16 02:16:30.651 [INFO][4528] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326" host="localhost" Dec 16 02:16:30.683847 containerd[1550]: 2025-12-16 02:16:30.651 [INFO][4528] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:16:30.683847 containerd[1550]: 2025-12-16 02:16:30.651 [INFO][4528] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326" HandleID="k8s-pod-network.730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326" Workload="localhost-k8s-csi--node--driver--xd2kv-eth0" Dec 16 02:16:30.684872 containerd[1550]: 2025-12-16 02:16:30.655 [INFO][4500] cni-plugin/k8s.go 418: Populated endpoint ContainerID="730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326" Namespace="calico-system" Pod="csi-node-driver-xd2kv" WorkloadEndpoint="localhost-k8s-csi--node--driver--xd2kv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xd2kv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 16, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xd2kv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic66591853c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:16:30.684872 containerd[1550]: 2025-12-16 02:16:30.655 [INFO][4500] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326" Namespace="calico-system" Pod="csi-node-driver-xd2kv" WorkloadEndpoint="localhost-k8s-csi--node--driver--xd2kv-eth0" Dec 16 02:16:30.684872 containerd[1550]: 2025-12-16 02:16:30.655 [INFO][4500] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic66591853c1 ContainerID="730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326" Namespace="calico-system" Pod="csi-node-driver-xd2kv" WorkloadEndpoint="localhost-k8s-csi--node--driver--xd2kv-eth0" Dec 16 02:16:30.684872 containerd[1550]: 2025-12-16 02:16:30.659 [INFO][4500] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326" Namespace="calico-system" Pod="csi-node-driver-xd2kv" WorkloadEndpoint="localhost-k8s-csi--node--driver--xd2kv-eth0" Dec 16 02:16:30.684872 containerd[1550]: 2025-12-16 02:16:30.660 [INFO][4500] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326" Namespace="calico-system" Pod="csi-node-driver-xd2kv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--xd2kv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xd2kv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 16, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326", Pod:"csi-node-driver-xd2kv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic66591853c1", MAC:"26:96:eb:14:95:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:16:30.684872 containerd[1550]: 2025-12-16 02:16:30.679 [INFO][4500] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326" Namespace="calico-system" Pod="csi-node-driver-xd2kv" WorkloadEndpoint="localhost-k8s-csi--node--driver--xd2kv-eth0" Dec 16 02:16:30.693000 audit[4552]: NETFILTER_CFG table=filter:140 family=2 entries=44 op=nft_register_chain pid=4552 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:16:30.693000 audit[4552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21936 a0=3 a1=fffff99c85e0 a2=0 a3=ffff94d58fa8 items=0 ppid=4022 pid=4552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:30.693000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:16:30.720604 containerd[1550]: time="2025-12-16T02:16:30.720264412Z" level=info msg="connecting to shim 730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326" address="unix:///run/containerd/s/5c92fe39583f3970118f7b8736f7fa30cba1953eed7a0b6f67c3d39c542609c4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:16:30.750808 systemd[1]: Started cri-containerd-730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326.scope - libcontainer container 730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326. 
Dec 16 02:16:30.758549 kubelet[2693]: E1216 02:16:30.758220 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:30.758549 kubelet[2693]: E1216 02:16:30.758388 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754cc876f4-x89dv" podUID="7aeba5eb-89d7-4d56-af0a-38d8908b6a09" Dec 16 02:16:30.758549 kubelet[2693]: E1216 02:16:30.758495 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79dfb47d67-g8xrk" podUID="1740e8a2-cfc1-49fa-aafb-3ebfadc1402f" Dec 16 02:16:30.767416 systemd-networkd[1468]: cali6ee078dad5a: Link UP Dec 16 02:16:30.767576 systemd-networkd[1468]: cali6ee078dad5a: Gained carrier Dec 16 02:16:30.771000 audit: BPF prog-id=236 op=LOAD Dec 16 02:16:30.773000 audit: BPF prog-id=237 op=LOAD Dec 16 02:16:30.773000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=4561 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:30.773000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733303535326334353932336361336163653335613131303065636431 Dec 16 02:16:30.773000 audit: BPF prog-id=237 op=UNLOAD Dec 16 02:16:30.773000 audit[4572]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4561 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:30.773000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733303535326334353932336361336163653335613131303065636431 Dec 16 02:16:30.774000 audit: BPF prog-id=238 op=LOAD Dec 16 02:16:30.774000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4561 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:30.774000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733303535326334353932336361336163653335613131303065636431 Dec 16 02:16:30.775000 audit: BPF prog-id=239 op=LOAD Dec 16 02:16:30.775000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4561 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:30.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733303535326334353932336361336163653335613131303065636431 Dec 16 02:16:30.775000 audit: BPF prog-id=239 op=UNLOAD Dec 16 02:16:30.775000 audit[4572]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4561 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:30.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733303535326334353932336361336163653335613131303065636431 Dec 16 02:16:30.775000 audit: BPF prog-id=238 op=UNLOAD Dec 16 02:16:30.775000 audit[4572]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4561 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:30.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733303535326334353932336361336163653335613131303065636431 Dec 16 02:16:30.775000 audit: BPF prog-id=240 op=LOAD Dec 16 02:16:30.775000 audit[4572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4561 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:30.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733303535326334353932336361336163653335613131303065636431 Dec 16 02:16:30.792120 containerd[1550]: 2025-12-16 02:16:30.587 [INFO][4495] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--lddqc-eth0 goldmane-666569f655- calico-system c7d46623-447d-4a3b-a433-802c6ce8e063 898 0 2025-12-16 02:16:05 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 
localhost goldmane-666569f655-lddqc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6ee078dad5a [] [] }} ContainerID="e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798" Namespace="calico-system" Pod="goldmane-666569f655-lddqc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lddqc-" Dec 16 02:16:30.792120 containerd[1550]: 2025-12-16 02:16:30.587 [INFO][4495] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798" Namespace="calico-system" Pod="goldmane-666569f655-lddqc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lddqc-eth0" Dec 16 02:16:30.792120 containerd[1550]: 2025-12-16 02:16:30.623 [INFO][4529] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798" HandleID="k8s-pod-network.e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798" Workload="localhost-k8s-goldmane--666569f655--lddqc-eth0" Dec 16 02:16:30.792120 containerd[1550]: 2025-12-16 02:16:30.623 [INFO][4529] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798" HandleID="k8s-pod-network.e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798" Workload="localhost-k8s-goldmane--666569f655--lddqc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b95d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-lddqc", "timestamp":"2025-12-16 02:16:30.623470617 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:16:30.792120 containerd[1550]: 2025-12-16 02:16:30.623 [INFO][4529] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:16:30.792120 containerd[1550]: 2025-12-16 02:16:30.651 [INFO][4529] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 02:16:30.792120 containerd[1550]: 2025-12-16 02:16:30.651 [INFO][4529] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 02:16:30.792120 containerd[1550]: 2025-12-16 02:16:30.725 [INFO][4529] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798" host="localhost" Dec 16 02:16:30.792120 containerd[1550]: 2025-12-16 02:16:30.730 [INFO][4529] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 02:16:30.792120 containerd[1550]: 2025-12-16 02:16:30.736 [INFO][4529] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 02:16:30.792120 containerd[1550]: 2025-12-16 02:16:30.739 [INFO][4529] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 02:16:30.792120 containerd[1550]: 2025-12-16 02:16:30.744 [INFO][4529] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 02:16:30.792120 containerd[1550]: 2025-12-16 02:16:30.744 [INFO][4529] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798" host="localhost" Dec 16 02:16:30.792120 containerd[1550]: 2025-12-16 02:16:30.746 [INFO][4529] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798 Dec 16 02:16:30.792120 containerd[1550]: 2025-12-16 02:16:30.750 [INFO][4529] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798" host="localhost" Dec 16 02:16:30.792120 containerd[1550]: 2025-12-16 02:16:30.758 [INFO][4529] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798" host="localhost" Dec 16 02:16:30.792120 containerd[1550]: 2025-12-16 02:16:30.759 [INFO][4529] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798" host="localhost" Dec 16 02:16:30.792120 containerd[1550]: 2025-12-16 02:16:30.759 [INFO][4529] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:16:30.792120 containerd[1550]: 2025-12-16 02:16:30.759 [INFO][4529] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798" HandleID="k8s-pod-network.e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798" Workload="localhost-k8s-goldmane--666569f655--lddqc-eth0" Dec 16 02:16:30.793282 containerd[1550]: 2025-12-16 02:16:30.763 [INFO][4495] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798" Namespace="calico-system" Pod="goldmane-666569f655-lddqc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lddqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--lddqc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c7d46623-447d-4a3b-a433-802c6ce8e063", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 16, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-lddqc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6ee078dad5a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:16:30.793282 containerd[1550]: 2025-12-16 02:16:30.764 [INFO][4495] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798" Namespace="calico-system" Pod="goldmane-666569f655-lddqc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lddqc-eth0" Dec 16 02:16:30.793282 containerd[1550]: 2025-12-16 02:16:30.764 [INFO][4495] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ee078dad5a ContainerID="e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798" Namespace="calico-system" Pod="goldmane-666569f655-lddqc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lddqc-eth0" Dec 16 02:16:30.793282 containerd[1550]: 2025-12-16 02:16:30.766 [INFO][4495] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798" Namespace="calico-system" Pod="goldmane-666569f655-lddqc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lddqc-eth0" Dec 16 02:16:30.793282 containerd[1550]: 2025-12-16 02:16:30.766 [INFO][4495] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798" Namespace="calico-system" Pod="goldmane-666569f655-lddqc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lddqc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--lddqc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c7d46623-447d-4a3b-a433-802c6ce8e063", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 16, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798", Pod:"goldmane-666569f655-lddqc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6ee078dad5a", MAC:"e2:df:0c:0e:24:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:16:30.793282 containerd[1550]: 2025-12-16 02:16:30.787 [INFO][4495] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798" Namespace="calico-system" Pod="goldmane-666569f655-lddqc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lddqc-eth0" Dec 16 02:16:30.799974 systemd-resolved[1253]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 02:16:30.806000 audit[4604]: NETFILTER_CFG table=filter:141 family=2 entries=56 op=nft_register_chain pid=4604 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:16:30.806000 audit[4604]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28728 a0=3 a1=fffffc1bfbe0 a2=0 a3=ffff98b23fa8 items=0 ppid=4022 pid=4604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:30.806000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:16:30.831543 containerd[1550]: time="2025-12-16T02:16:30.831502087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xd2kv,Uid:fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff,Namespace:calico-system,Attempt:0,} returns sandbox id \"730552c45923ca3ace35a1100ecd13df13183a17283526189c6b07207ae8b326\"" Dec 16 02:16:30.832872 containerd[1550]: time="2025-12-16T02:16:30.832826902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 02:16:30.861862 containerd[1550]: time="2025-12-16T02:16:30.861817112Z" level=info msg="connecting to shim e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798" address="unix:///run/containerd/s/c66975e7ef13e8b8d40db98b070b707a84f098267695c06589f6e427d115c316" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:16:30.902800 systemd[1]: Started 
cri-containerd-e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798.scope - libcontainer container e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798. Dec 16 02:16:30.911000 audit: BPF prog-id=241 op=LOAD Dec 16 02:16:30.911000 audit: BPF prog-id=242 op=LOAD Dec 16 02:16:30.911000 audit[4633]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4621 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:30.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537623164386364373533643239613933333963396431386630356539 Dec 16 02:16:30.911000 audit: BPF prog-id=242 op=UNLOAD Dec 16 02:16:30.911000 audit[4633]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4621 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:30.911000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537623164386364373533643239613933333963396431386630356539 Dec 16 02:16:30.912000 audit: BPF prog-id=243 op=LOAD Dec 16 02:16:30.912000 audit[4633]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4621 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:30.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537623164386364373533643239613933333963396431386630356539 Dec 16 02:16:30.912000 audit: BPF prog-id=244 op=LOAD Dec 16 02:16:30.912000 audit[4633]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4621 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:30.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537623164386364373533643239613933333963396431386630356539 Dec 16 02:16:30.912000 audit: BPF prog-id=244 op=UNLOAD Dec 16 02:16:30.912000 audit[4633]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4621 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:30.912000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537623164386364373533643239613933333963396431386630356539 Dec 16 02:16:30.912000 audit: BPF prog-id=243 op=UNLOAD Dec 16 02:16:30.912000 audit[4633]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4621 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:30.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537623164386364373533643239613933333963396431386630356539 Dec 16 02:16:30.913000 audit: BPF prog-id=245 op=LOAD Dec 16 02:16:30.913000 audit[4633]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4621 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:30.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537623164386364373533643239613933333963396431386630356539 Dec 16 02:16:30.915559 systemd-resolved[1253]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 02:16:30.937400 containerd[1550]: time="2025-12-16T02:16:30.937362930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lddqc,Uid:c7d46623-447d-4a3b-a433-802c6ce8e063,Namespace:calico-system,Attempt:0,} returns sandbox id \"e7b1d8cd753d29a9339c9d18f05e9299e9e4723e4fa812f7cfad020b112cb798\"" Dec 16 02:16:30.995829 systemd-networkd[1468]: cali1a6eee03d0a: Gained IPv6LL Dec 16 02:16:31.051424 containerd[1550]: time="2025-12-16T02:16:31.051297384Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:16:31.052425 containerd[1550]: time="2025-12-16T02:16:31.052382573Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 02:16:31.052570 containerd[1550]: time="2025-12-16T02:16:31.052472902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 02:16:31.052740 kubelet[2693]: E1216 02:16:31.052683 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:16:31.052788 kubelet[2693]: E1216 02:16:31.052753 2693 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 02:16:31.053135 
kubelet[2693]: E1216 02:16:31.053089 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n67sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xd2kv_calico-system(fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 02:16:31.053281 containerd[1550]: time="2025-12-16T02:16:31.053164931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 02:16:31.258418 containerd[1550]: time="2025-12-16T02:16:31.258303304Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:16:31.259345 containerd[1550]: time="2025-12-16T02:16:31.259309765Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 02:16:31.259420 containerd[1550]: time="2025-12-16T02:16:31.259319406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 02:16:31.259613 kubelet[2693]: E1216 02:16:31.259550 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:16:31.259685 kubelet[2693]: E1216 02:16:31.259620 2693 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 02:16:31.260075 kubelet[2693]: E1216 02:16:31.259985 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2z62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lddqc_calico-system(c7d46623-447d-4a3b-a433-802c6ce8e063): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 02:16:31.260436 containerd[1550]: time="2025-12-16T02:16:31.260135848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 02:16:31.261318 kubelet[2693]: E1216 02:16:31.261282 2693 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lddqc" podUID="c7d46623-447d-4a3b-a433-802c6ce8e063" Dec 16 02:16:31.499826 containerd[1550]: time="2025-12-16T02:16:31.499636554Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:16:31.500956 containerd[1550]: time="2025-12-16T02:16:31.500849396Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 02:16:31.500956 containerd[1550]: time="2025-12-16T02:16:31.500906521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 02:16:31.501133 kubelet[2693]: E1216 02:16:31.501077 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:16:31.501133 kubelet[2693]: E1216 02:16:31.501128 2693 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 02:16:31.501296 kubelet[2693]: E1216 02:16:31.501246 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n67sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xd2kv_calico-system(fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 02:16:31.502771 kubelet[2693]: E1216 02:16:31.502699 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xd2kv" podUID="fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff" Dec 16 02:16:31.521815 kubelet[2693]: E1216 02:16:31.521580 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:31.522579 containerd[1550]: time="2025-12-16T02:16:31.522115532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79dfb47d67-2sm7d,Uid:940a1e00-e3f0-45f9-b45b-33acce551ddd,Namespace:calico-apiserver,Attempt:0,}" Dec 16 02:16:31.522579 containerd[1550]: time="2025-12-16T02:16:31.522450606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-86t5b,Uid:eca2cc20-8a5f-44b4-a022-1a39eef052f3,Namespace:kube-system,Attempt:0,}" Dec 16 
02:16:31.653506 systemd-networkd[1468]: calif8dded6ce64: Link UP Dec 16 02:16:31.654345 systemd-networkd[1468]: calif8dded6ce64: Gained carrier Dec 16 02:16:31.671790 containerd[1550]: 2025-12-16 02:16:31.569 [INFO][4659] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79dfb47d67--2sm7d-eth0 calico-apiserver-79dfb47d67- calico-apiserver 940a1e00-e3f0-45f9-b45b-33acce551ddd 899 0 2025-12-16 02:16:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79dfb47d67 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79dfb47d67-2sm7d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif8dded6ce64 [] [] }} ContainerID="5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4" Namespace="calico-apiserver" Pod="calico-apiserver-79dfb47d67-2sm7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--79dfb47d67--2sm7d-" Dec 16 02:16:31.671790 containerd[1550]: 2025-12-16 02:16:31.569 [INFO][4659] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4" Namespace="calico-apiserver" Pod="calico-apiserver-79dfb47d67-2sm7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--79dfb47d67--2sm7d-eth0" Dec 16 02:16:31.671790 containerd[1550]: 2025-12-16 02:16:31.600 [INFO][4687] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4" HandleID="k8s-pod-network.5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4" Workload="localhost-k8s-calico--apiserver--79dfb47d67--2sm7d-eth0" Dec 16 02:16:31.671790 containerd[1550]: 2025-12-16 02:16:31.600 [INFO][4687] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4" HandleID="k8s-pod-network.5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4" Workload="localhost-k8s-calico--apiserver--79dfb47d67--2sm7d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79dfb47d67-2sm7d", "timestamp":"2025-12-16 02:16:31.600107169 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:16:31.671790 containerd[1550]: 2025-12-16 02:16:31.600 [INFO][4687] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:16:31.671790 containerd[1550]: 2025-12-16 02:16:31.600 [INFO][4687] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 02:16:31.671790 containerd[1550]: 2025-12-16 02:16:31.600 [INFO][4687] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 02:16:31.671790 containerd[1550]: 2025-12-16 02:16:31.611 [INFO][4687] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4" host="localhost" Dec 16 02:16:31.671790 containerd[1550]: 2025-12-16 02:16:31.621 [INFO][4687] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 02:16:31.671790 containerd[1550]: 2025-12-16 02:16:31.627 [INFO][4687] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 02:16:31.671790 containerd[1550]: 2025-12-16 02:16:31.629 [INFO][4687] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 02:16:31.671790 containerd[1550]: 2025-12-16 02:16:31.632 [INFO][4687] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 02:16:31.671790 containerd[1550]: 2025-12-16 02:16:31.632 [INFO][4687] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4" host="localhost" Dec 16 02:16:31.671790 containerd[1550]: 2025-12-16 02:16:31.634 [INFO][4687] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4 Dec 16 02:16:31.671790 containerd[1550]: 2025-12-16 02:16:31.638 [INFO][4687] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4" host="localhost" Dec 16 02:16:31.671790 containerd[1550]: 2025-12-16 02:16:31.646 [INFO][4687] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4" host="localhost" Dec 16 02:16:31.671790 containerd[1550]: 2025-12-16 02:16:31.646 [INFO][4687] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4" host="localhost" Dec 16 02:16:31.671790 containerd[1550]: 2025-12-16 02:16:31.646 [INFO][4687] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:16:31.671790 containerd[1550]: 2025-12-16 02:16:31.646 [INFO][4687] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4" HandleID="k8s-pod-network.5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4" Workload="localhost-k8s-calico--apiserver--79dfb47d67--2sm7d-eth0" Dec 16 02:16:31.673783 containerd[1550]: 2025-12-16 02:16:31.650 [INFO][4659] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4" Namespace="calico-apiserver" Pod="calico-apiserver-79dfb47d67-2sm7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--79dfb47d67--2sm7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79dfb47d67--2sm7d-eth0", GenerateName:"calico-apiserver-79dfb47d67-", Namespace:"calico-apiserver", SelfLink:"", UID:"940a1e00-e3f0-45f9-b45b-33acce551ddd", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 16, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79dfb47d67", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79dfb47d67-2sm7d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8dded6ce64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:16:31.673783 containerd[1550]: 2025-12-16 02:16:31.651 [INFO][4659] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4" Namespace="calico-apiserver" Pod="calico-apiserver-79dfb47d67-2sm7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--79dfb47d67--2sm7d-eth0" Dec 16 02:16:31.673783 containerd[1550]: 2025-12-16 02:16:31.651 [INFO][4659] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8dded6ce64 ContainerID="5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4" Namespace="calico-apiserver" Pod="calico-apiserver-79dfb47d67-2sm7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--79dfb47d67--2sm7d-eth0" Dec 16 02:16:31.673783 containerd[1550]: 2025-12-16 02:16:31.653 [INFO][4659] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4" Namespace="calico-apiserver" Pod="calico-apiserver-79dfb47d67-2sm7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--79dfb47d67--2sm7d-eth0" Dec 16 02:16:31.673783 containerd[1550]: 2025-12-16 02:16:31.653 [INFO][4659] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4" Namespace="calico-apiserver" Pod="calico-apiserver-79dfb47d67-2sm7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--79dfb47d67--2sm7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79dfb47d67--2sm7d-eth0", GenerateName:"calico-apiserver-79dfb47d67-", Namespace:"calico-apiserver", SelfLink:"", UID:"940a1e00-e3f0-45f9-b45b-33acce551ddd", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 16, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79dfb47d67", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4", Pod:"calico-apiserver-79dfb47d67-2sm7d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif8dded6ce64", MAC:"56:c4:bf:96:61:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:16:31.673783 containerd[1550]: 2025-12-16 02:16:31.666 [INFO][4659] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4" Namespace="calico-apiserver" Pod="calico-apiserver-79dfb47d67-2sm7d" WorkloadEndpoint="localhost-k8s-calico--apiserver--79dfb47d67--2sm7d-eth0" Dec 16 02:16:31.688000 audit[4713]: NETFILTER_CFG table=filter:142 family=2 entries=41 op=nft_register_chain pid=4713 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:16:31.688000 audit[4713]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23096 a0=3 a1=ffffcf477950 a2=0 a3=ffff91482fa8 items=0 ppid=4022 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.688000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:16:31.705842 containerd[1550]: time="2025-12-16T02:16:31.705791589Z" level=info msg="connecting to shim 5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4" address="unix:///run/containerd/s/9b7da5b1b2c6148ec6a36b162cc1b4cb4ae0bd2c5a85720ae0df89fdeebab414" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:16:31.754847 systemd[1]: Started cri-containerd-5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4.scope - libcontainer container 5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4. 
Dec 16 02:16:31.767125 kubelet[2693]: E1216 02:16:31.767050 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754cc876f4-x89dv" podUID="7aeba5eb-89d7-4d56-af0a-38d8908b6a09" Dec 16 02:16:31.768511 kubelet[2693]: E1216 02:16:31.767773 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xd2kv" podUID="fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff" Dec 16 02:16:31.780704 systemd-networkd[1468]: califc6754f2217: Link UP Dec 16 02:16:31.781915 systemd-networkd[1468]: califc6754f2217: Gained carrier Dec 16 02:16:31.787431 kubelet[2693]: E1216 02:16:31.787390 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lddqc" podUID="c7d46623-447d-4a3b-a433-802c6ce8e063" Dec 16 02:16:31.798000 audit: BPF prog-id=246 op=LOAD Dec 16 02:16:31.800000 audit: BPF prog-id=247 op=LOAD Dec 16 02:16:31.800000 audit[4735]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=4723 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539363164303739383136613366313864643338313665363232323233 Dec 16 02:16:31.801000 audit: BPF prog-id=247 op=UNLOAD Dec 16 02:16:31.801000 audit[4735]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4723 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.801000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539363164303739383136613366313864643338313665363232323233 Dec 16 02:16:31.801000 audit: BPF prog-id=248 op=LOAD Dec 16 02:16:31.801000 audit[4735]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4723 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539363164303739383136613366313864643338313665363232323233 Dec 16 02:16:31.801000 audit: BPF prog-id=249 op=LOAD Dec 16 02:16:31.801000 audit[4735]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4723 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539363164303739383136613366313864643338313665363232323233 Dec 16 02:16:31.801000 audit: BPF prog-id=249 op=UNLOAD Dec 16 02:16:31.801000 audit[4735]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4723 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539363164303739383136613366313864643338313665363232323233 Dec 16 02:16:31.801000 audit: BPF prog-id=248 op=UNLOAD Dec 16 02:16:31.801000 audit[4735]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4723 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539363164303739383136613366313864643338313665363232323233 Dec 16 02:16:31.801000 audit: BPF prog-id=250 op=LOAD Dec 16 02:16:31.801000 audit[4735]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4723 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.801000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539363164303739383136613366313864643338313665363232323233 Dec 16 02:16:31.805046 systemd-resolved[1253]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 02:16:31.808795 containerd[1550]: 2025-12-16 02:16:31.575 [INFO][4670] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--86t5b-eth0 coredns-668d6bf9bc- kube-system eca2cc20-8a5f-44b4-a022-1a39eef052f3 901 0 2025-12-16 02:15:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-86t5b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califc6754f2217 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e" Namespace="kube-system" Pod="coredns-668d6bf9bc-86t5b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--86t5b-" Dec 16 02:16:31.808795 containerd[1550]: 2025-12-16 02:16:31.576 [INFO][4670] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e" Namespace="kube-system" Pod="coredns-668d6bf9bc-86t5b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--86t5b-eth0" Dec 16 02:16:31.808795 containerd[1550]: 2025-12-16 02:16:31.611 [INFO][4693] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e" HandleID="k8s-pod-network.ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e" Workload="localhost-k8s-coredns--668d6bf9bc--86t5b-eth0" Dec 16 02:16:31.808795 containerd[1550]: 2025-12-16 02:16:31.612 [INFO][4693] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e" HandleID="k8s-pod-network.ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e" Workload="localhost-k8s-coredns--668d6bf9bc--86t5b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-86t5b", "timestamp":"2025-12-16 02:16:31.611970481 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 02:16:31.808795 containerd[1550]: 2025-12-16 02:16:31.613 [INFO][4693] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 02:16:31.808795 containerd[1550]: 2025-12-16 02:16:31.646 [INFO][4693] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 02:16:31.808795 containerd[1550]: 2025-12-16 02:16:31.646 [INFO][4693] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 02:16:31.808795 containerd[1550]: 2025-12-16 02:16:31.716 [INFO][4693] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e" host="localhost" Dec 16 02:16:31.808795 containerd[1550]: 2025-12-16 02:16:31.725 [INFO][4693] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 02:16:31.808795 containerd[1550]: 2025-12-16 02:16:31.734 [INFO][4693] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 02:16:31.808795 containerd[1550]: 2025-12-16 02:16:31.739 [INFO][4693] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 02:16:31.808795 containerd[1550]: 2025-12-16 02:16:31.746 [INFO][4693] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 02:16:31.808795 containerd[1550]: 2025-12-16 02:16:31.746 [INFO][4693] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e" host="localhost" Dec 16 02:16:31.808795 containerd[1550]: 2025-12-16 02:16:31.748 [INFO][4693] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e Dec 16 02:16:31.808795 containerd[1550]: 2025-12-16 02:16:31.752 [INFO][4693] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e" host="localhost" Dec 16 02:16:31.808795 containerd[1550]: 2025-12-16 02:16:31.761 [INFO][4693] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e" host="localhost" Dec 16 02:16:31.808795 containerd[1550]: 2025-12-16 02:16:31.761 [INFO][4693] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e" host="localhost" Dec 16 02:16:31.808795 containerd[1550]: 2025-12-16 02:16:31.761 [INFO][4693] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 02:16:31.808795 containerd[1550]: 2025-12-16 02:16:31.761 [INFO][4693] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e" HandleID="k8s-pod-network.ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e" Workload="localhost-k8s-coredns--668d6bf9bc--86t5b-eth0" Dec 16 02:16:31.809532 containerd[1550]: 2025-12-16 02:16:31.770 [INFO][4670] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e" Namespace="kube-system" Pod="coredns-668d6bf9bc-86t5b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--86t5b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--86t5b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"eca2cc20-8a5f-44b4-a022-1a39eef052f3", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 15, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-86t5b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc6754f2217", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:16:31.809532 containerd[1550]: 2025-12-16 02:16:31.770 [INFO][4670] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e" Namespace="kube-system" Pod="coredns-668d6bf9bc-86t5b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--86t5b-eth0" Dec 16 02:16:31.809532 containerd[1550]: 2025-12-16 02:16:31.770 [INFO][4670] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc6754f2217 ContainerID="ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e" Namespace="kube-system" Pod="coredns-668d6bf9bc-86t5b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--86t5b-eth0" Dec 16 02:16:31.809532 containerd[1550]: 2025-12-16 02:16:31.784 [INFO][4670] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e" Namespace="kube-system" Pod="coredns-668d6bf9bc-86t5b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--86t5b-eth0" Dec 16 02:16:31.809532 
containerd[1550]: 2025-12-16 02:16:31.789 [INFO][4670] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e" Namespace="kube-system" Pod="coredns-668d6bf9bc-86t5b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--86t5b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--86t5b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"eca2cc20-8a5f-44b4-a022-1a39eef052f3", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 2, 15, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e", Pod:"coredns-668d6bf9bc-86t5b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc6754f2217", MAC:"9a:d4:f9:c3:fa:c1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 02:16:31.809532 containerd[1550]: 2025-12-16 02:16:31.804 [INFO][4670] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e" Namespace="kube-system" Pod="coredns-668d6bf9bc-86t5b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--86t5b-eth0" Dec 16 02:16:31.810000 audit[4758]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=4758 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:31.810000 audit[4758]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd6ef9a60 a2=0 a3=1 items=0 ppid=2803 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.810000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:31.816000 audit[4758]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=4758 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:31.816000 audit[4758]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd6ef9a60 a2=0 a3=1 items=0 ppid=2803 pid=4758 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.816000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:31.835000 audit[4772]: NETFILTER_CFG table=filter:145 family=2 entries=40 op=nft_register_chain pid=4772 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 02:16:31.835000 audit[4772]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20368 a0=3 a1=ffffe76cccc0 a2=0 a3=ffff975dcfa8 items=0 ppid=4022 pid=4772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.835000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 02:16:31.840095 containerd[1550]: time="2025-12-16T02:16:31.839943908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79dfb47d67-2sm7d,Uid:940a1e00-e3f0-45f9-b45b-33acce551ddd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5961d079816a3f18dd3816e622223a029f5a6555f8daf90a13d5a6bc9bb497c4\"" Dec 16 02:16:31.842577 containerd[1550]: time="2025-12-16T02:16:31.842538329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:16:31.844651 containerd[1550]: time="2025-12-16T02:16:31.844612058Z" level=info msg="connecting to shim ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e" address="unix:///run/containerd/s/e3c061ac4b20c78defba8e5058b564862063f9c3bfd65538b8e0e6512a9328ec" namespace=k8s.io protocol=ttrpc version=3 Dec 16 02:16:31.880829 systemd[1]: Started cri-containerd-ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e.scope - libcontainer container ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e. 
Dec 16 02:16:31.891000 audit: BPF prog-id=251 op=LOAD Dec 16 02:16:31.892000 audit: BPF prog-id=252 op=LOAD Dec 16 02:16:31.892000 audit[4793]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4782 pid=4793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361373035646562643737636339633665323534346432363531623039 Dec 16 02:16:31.892000 audit: BPF prog-id=252 op=UNLOAD Dec 16 02:16:31.892000 audit[4793]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4782 pid=4793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361373035646562643737636339633665323534346432363531623039 Dec 16 02:16:31.892000 audit: BPF prog-id=253 op=LOAD Dec 16 02:16:31.892000 audit[4793]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4782 pid=4793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361373035646562643737636339633665323534346432363531623039 Dec 16 02:16:31.892000 audit: BPF prog-id=254 op=LOAD Dec 16 02:16:31.892000 audit[4793]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4782 pid=4793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361373035646562643737636339633665323534346432363531623039 Dec 16 02:16:31.892000 audit: BPF prog-id=254 op=UNLOAD Dec 16 02:16:31.892000 audit[4793]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4782 pid=4793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361373035646562643737636339633665323534346432363531623039 Dec 16 02:16:31.892000 audit: BPF prog-id=253 op=UNLOAD Dec 16 02:16:31.892000 audit[4793]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4782 pid=4793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361373035646562643737636339633665323534346432363531623039 Dec 16 02:16:31.892000 audit: BPF prog-id=255 op=LOAD Dec 16 02:16:31.892000 audit[4793]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4782 pid=4793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361373035646562643737636339633665323534346432363531623039 Dec 16 02:16:31.893862 systemd-resolved[1253]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 02:16:31.914220 containerd[1550]: time="2025-12-16T02:16:31.914155005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-86t5b,Uid:eca2cc20-8a5f-44b4-a022-1a39eef052f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e\"" Dec 16 02:16:31.915069 kubelet[2693]: E1216 02:16:31.915042 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:31.917237 containerd[1550]: time="2025-12-16T02:16:31.917110982Z" level=info msg="CreateContainer within sandbox \"ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 02:16:31.930944 containerd[1550]: time="2025-12-16T02:16:31.930652143Z" level=info msg="Container bc3931cc9af74440196f61d9360e7a078bb157875793aee5a59772260981ee5a: CDI devices from CRI Config.CDIDevices: []" Dec 16 02:16:31.936502 containerd[1550]: time="2025-12-16T02:16:31.936439885Z" level=info msg="CreateContainer within sandbox \"ca705debd77cc9c6e2544d2651b09c035b813d7269720e29b08d8ab8c4ea765e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc3931cc9af74440196f61d9360e7a078bb157875793aee5a59772260981ee5a\"" Dec 16 02:16:31.937072 containerd[1550]: time="2025-12-16T02:16:31.937043865Z" level=info msg="StartContainer for \"bc3931cc9af74440196f61d9360e7a078bb157875793aee5a59772260981ee5a\"" Dec 16 02:16:31.939115 containerd[1550]: time="2025-12-16T02:16:31.939072469Z" level=info msg="connecting to shim bc3931cc9af74440196f61d9360e7a078bb157875793aee5a59772260981ee5a" address="unix:///run/containerd/s/e3c061ac4b20c78defba8e5058b564862063f9c3bfd65538b8e0e6512a9328ec" protocol=ttrpc version=3 Dec 16 02:16:31.964843 systemd[1]: Started cri-containerd-bc3931cc9af74440196f61d9360e7a078bb157875793aee5a59772260981ee5a.scope - libcontainer container bc3931cc9af74440196f61d9360e7a078bb157875793aee5a59772260981ee5a. 
Dec 16 02:16:31.976000 audit: BPF prog-id=256 op=LOAD Dec 16 02:16:31.977000 audit: BPF prog-id=257 op=LOAD Dec 16 02:16:31.977000 audit[4818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4782 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263333933316363396166373434343031393666363164393336306537 Dec 16 02:16:31.977000 audit: BPF prog-id=257 op=UNLOAD Dec 16 02:16:31.977000 audit[4818]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4782 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263333933316363396166373434343031393666363164393336306537 Dec 16 02:16:31.977000 audit: BPF prog-id=258 op=LOAD Dec 16 02:16:31.977000 audit[4818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4782 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263333933316363396166373434343031393666363164393336306537 Dec 16 02:16:31.977000 audit: BPF prog-id=259 op=LOAD Dec 16 02:16:31.977000 audit[4818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4782 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263333933316363396166373434343031393666363164393336306537 Dec 16 02:16:31.977000 audit: BPF prog-id=259 op=UNLOAD Dec 16 02:16:31.977000 audit[4818]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4782 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263333933316363396166373434343031393666363164393336306537 Dec 16 02:16:31.977000 audit: BPF prog-id=258 op=UNLOAD Dec 16 02:16:31.977000 audit[4818]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4782 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263333933316363396166373434343031393666363164393336306537 Dec 16 02:16:31.977000 audit: BPF prog-id=260 op=LOAD Dec 16 02:16:31.977000 audit[4818]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4782 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:31.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263333933316363396166373434343031393666363164393336306537 Dec 16 02:16:32.000277 containerd[1550]: time="2025-12-16T02:16:32.000238495Z" level=info msg="StartContainer for \"bc3931cc9af74440196f61d9360e7a078bb157875793aee5a59772260981ee5a\" returns successfully" Dec 16 02:16:32.049742 containerd[1550]: time="2025-12-16T02:16:32.049622368Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:16:32.051499 containerd[1550]: time="2025-12-16T02:16:32.051418705Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:16:32.051577 containerd[1550]: time="2025-12-16T02:16:32.051514474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:16:32.052187 kubelet[2693]: E1216 02:16:32.051665 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:16:32.052187 kubelet[2693]: E1216 02:16:32.051730 2693 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:16:32.052187 kubelet[2693]: E1216 02:16:32.051871 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d89cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79dfb47d67-2sm7d_calico-apiserver(940a1e00-e3f0-45f9-b45b-33acce551ddd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:16:32.053543 kubelet[2693]: E1216 02:16:32.053341 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79dfb47d67-2sm7d" podUID="940a1e00-e3f0-45f9-b45b-33acce551ddd" Dec 16 02:16:32.275709 systemd-networkd[1468]: calic66591853c1: Gained IPv6LL Dec 16 02:16:32.276250 systemd-networkd[1468]: cali6ee078dad5a: Gained IPv6LL Dec 16 02:16:32.627352 kubelet[2693]: I1216 02:16:32.627294 2693 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 02:16:32.627861 kubelet[2693]: E1216 02:16:32.627843 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:32.772336 kubelet[2693]: E1216 02:16:32.772283 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:32.773278 kubelet[2693]: E1216 02:16:32.772485 2693 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:32.773419 kubelet[2693]: E1216 02:16:32.772862 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79dfb47d67-2sm7d" podUID="940a1e00-e3f0-45f9-b45b-33acce551ddd" Dec 16 02:16:32.773899 kubelet[2693]: E1216 02:16:32.772963 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lddqc" podUID="c7d46623-447d-4a3b-a433-802c6ce8e063" Dec 16 02:16:32.773899 kubelet[2693]: E1216 02:16:32.773796 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xd2kv" podUID="fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff" Dec 16 02:16:32.806000 audit[4904]: NETFILTER_CFG table=filter:146 family=2 entries=14 op=nft_register_rule pid=4904 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:32.810488 kernel: kauditd_printk_skb: 244 callbacks suppressed Dec 16 02:16:32.810724 kernel: audit: type=1325 audit(1765851392.806:769): table=filter:146 family=2 entries=14 op=nft_register_rule pid=4904 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:32.810834 kernel: audit: type=1300 audit(1765851392.806:769): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc5d5dfd0 a2=0 a3=1 items=0 ppid=2803 pid=4904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:32.806000 audit[4904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc5d5dfd0 a2=0 a3=1 items=0 ppid=2803 pid=4904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:32.815938 kubelet[2693]: I1216 02:16:32.815878 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-668d6bf9bc-86t5b" podStartSLOduration=42.815860279 podStartE2EDuration="42.815860279s" podCreationTimestamp="2025-12-16 02:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 02:16:32.815413355 +0000 UTC m=+49.376564098" watchObservedRunningTime="2025-12-16 02:16:32.815860279 +0000 UTC m=+49.377011022" Dec 16 02:16:32.806000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:32.817812 kernel: audit: type=1327 audit(1765851392.806:769): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:32.822000 audit[4904]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=4904 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:32.822000 audit[4904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc5d5dfd0 a2=0 a3=1 items=0 ppid=2803 pid=4904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:32.830405 kernel: audit: type=1325 audit(1765851392.822:770): table=nat:147 family=2 entries=20 op=nft_register_rule pid=4904 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:32.830470 kernel: audit: type=1300 audit(1765851392.822:770): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc5d5dfd0 a2=0 a3=1 items=0 ppid=2803 pid=4904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:32.822000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:32.833411 kernel: audit: type=1327 audit(1765851392.822:770): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:32.844000 audit[4906]: NETFILTER_CFG table=filter:148 family=2 entries=14 op=nft_register_rule pid=4906 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:32.844000 audit[4906]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcbe81640 a2=0 a3=1 items=0 ppid=2803 pid=4906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:32.851441 kernel: audit: type=1325 audit(1765851392.844:771): table=filter:148 family=2 entries=14 op=nft_register_rule pid=4906 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:32.851513 kernel: audit: type=1300 audit(1765851392.844:771): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcbe81640 a2=0 a3=1 items=0 ppid=2803 pid=4906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:32.851536 kernel: audit: type=1327 audit(1765851392.844:771): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:32.844000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:32.853525 systemd-networkd[1468]: califc6754f2217: Gained IPv6LL Dec 16 02:16:32.861000 audit[4906]: NETFILTER_CFG table=nat:149 family=2 entries=56 op=nft_register_chain pid=4906 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:32.861000 audit[4906]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffcbe81640 a2=0 a3=1 items=0 ppid=2803 pid=4906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:32.861000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:32.864607 kernel: audit: type=1325 audit(1765851392.861:772): table=nat:149 family=2 entries=56 op=nft_register_chain pid=4906 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:32.979829 systemd-networkd[1468]: calif8dded6ce64: Gained IPv6LL Dec 16 02:16:33.030819 systemd[1]: Started sshd@9-10.0.0.94:22-10.0.0.1:45206.service - OpenSSH per-connection server daemon (10.0.0.1:45206). Dec 16 02:16:33.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.94:22-10.0.0.1:45206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:16:33.091000 audit[4909]: USER_ACCT pid=4909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:33.092360 sshd[4909]: Accepted publickey for core from 10.0.0.1 port 45206 ssh2: RSA SHA256:rH1sOPbnKIrlCFQFTuacxecfE1BEKAl/Bfev/eSdaO4 Dec 16 02:16:33.092000 audit[4909]: CRED_ACQ pid=4909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:33.092000 audit[4909]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc326bdb0 a2=3 a3=0 items=0 ppid=1 pid=4909 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:33.092000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:16:33.094092 sshd-session[4909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:16:33.098662 systemd-logind[1526]: New session 11 of user core. Dec 16 02:16:33.108833 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 02:16:33.111000 audit[4909]: USER_START pid=4909 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:33.112000 audit[4913]: CRED_ACQ pid=4913 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:33.209421 sshd[4913]: Connection closed by 10.0.0.1 port 45206 Dec 16 02:16:33.209784 sshd-session[4909]: pam_unix(sshd:session): session closed for user core Dec 16 02:16:33.210000 audit[4909]: USER_END pid=4909 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:33.210000 audit[4909]: CRED_DISP pid=4909 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:33.224661 systemd[1]: sshd@9-10.0.0.94:22-10.0.0.1:45206.service: Deactivated successfully. Dec 16 02:16:33.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.94:22-10.0.0.1:45206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:16:33.226534 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 02:16:33.227487 systemd-logind[1526]: Session 11 logged out. Waiting for processes to exit. Dec 16 02:16:33.229988 systemd[1]: Started sshd@10-10.0.0.94:22-10.0.0.1:45212.service - OpenSSH per-connection server daemon (10.0.0.1:45212). Dec 16 02:16:33.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.94:22-10.0.0.1:45212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:16:33.231355 systemd-logind[1526]: Removed session 11. 
Dec 16 02:16:33.288000 audit[4927]: USER_ACCT pid=4927 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:33.289000 audit[4927]: CRED_ACQ pid=4927 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:33.289000 audit[4927]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe69a8e50 a2=3 a3=0 items=0 ppid=1 pid=4927 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:33.289000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:16:33.291953 sshd[4927]: Accepted publickey for core from 10.0.0.1 port 45212 ssh2: RSA SHA256:rH1sOPbnKIrlCFQFTuacxecfE1BEKAl/Bfev/eSdaO4 Dec 16 02:16:33.292064 sshd-session[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:16:33.302555 systemd-logind[1526]: New session 12 of user core. Dec 16 02:16:33.316867 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 02:16:33.318000 audit[4927]: USER_START pid=4927 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:33.320000 audit[4937]: CRED_ACQ pid=4937 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:33.506712 sshd[4937]: Connection closed by 10.0.0.1 port 45212 Dec 16 02:16:33.506494 sshd-session[4927]: pam_unix(sshd:session): session closed for user core Dec 16 02:16:33.509000 audit[4927]: USER_END pid=4927 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:33.509000 audit[4927]: CRED_DISP pid=4927 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:33.525433 systemd[1]: sshd@10-10.0.0.94:22-10.0.0.1:45212.service: Deactivated successfully. Dec 16 02:16:33.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.94:22-10.0.0.1:45212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:16:33.528250 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 02:16:33.530009 systemd-logind[1526]: Session 12 logged out. Waiting for processes to exit. 
Dec 16 02:16:33.535730 systemd[1]: Started sshd@11-10.0.0.94:22-10.0.0.1:45224.service - OpenSSH per-connection server daemon (10.0.0.1:45224). Dec 16 02:16:33.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.94:22-10.0.0.1:45224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:16:33.537360 systemd-logind[1526]: Removed session 12. Dec 16 02:16:33.607000 audit[4949]: USER_ACCT pid=4949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:33.609007 sshd[4949]: Accepted publickey for core from 10.0.0.1 port 45224 ssh2: RSA SHA256:rH1sOPbnKIrlCFQFTuacxecfE1BEKAl/Bfev/eSdaO4 Dec 16 02:16:33.609000 audit[4949]: CRED_ACQ pid=4949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:33.609000 audit[4949]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc6943ca0 a2=3 a3=0 items=0 ppid=1 pid=4949 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:33.609000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:16:33.610687 sshd-session[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:16:33.616007 systemd-logind[1526]: New session 13 of user core. Dec 16 02:16:33.626866 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 02:16:33.628000 audit[4949]: USER_START pid=4949 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:33.630000 audit[4956]: CRED_ACQ pid=4956 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:33.756648 sshd[4956]: Connection closed by 10.0.0.1 port 45224 Dec 16 02:16:33.756431 sshd-session[4949]: pam_unix(sshd:session): session closed for user core Dec 16 02:16:33.758000 audit[4949]: USER_END pid=4949 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:33.758000 audit[4949]: CRED_DISP pid=4949 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:33.761728 systemd[1]: sshd@11-10.0.0.94:22-10.0.0.1:45224.service: Deactivated successfully. 
Dec 16 02:16:33.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.94:22-10.0.0.1:45224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:16:33.765314 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 02:16:33.767290 systemd-logind[1526]: Session 13 logged out. Waiting for processes to exit. Dec 16 02:16:33.770102 systemd-logind[1526]: Removed session 13. Dec 16 02:16:33.773053 kubelet[2693]: E1216 02:16:33.773026 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:33.774597 kubelet[2693]: E1216 02:16:33.774539 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79dfb47d67-2sm7d" podUID="940a1e00-e3f0-45f9-b45b-33acce551ddd" Dec 16 02:16:34.774601 kubelet[2693]: E1216 02:16:34.774428 2693 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 02:16:37.524459 containerd[1550]: time="2025-12-16T02:16:37.524404645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 02:16:37.746147 containerd[1550]: time="2025-12-16T02:16:37.746101679Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:16:37.747071 containerd[1550]: time="2025-12-16T02:16:37.747032803Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 02:16:37.747157 containerd[1550]: time="2025-12-16T02:16:37.747122011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 02:16:37.747324 kubelet[2693]: E1216 02:16:37.747289 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:16:37.747631 kubelet[2693]: E1216 02:16:37.747336 2693 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 02:16:37.747631 kubelet[2693]: E1216 02:16:37.747443 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f0d1ca7912f0477b87dcb1de770b774b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6kq48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cd49d4685-v8mf6_calico-system(0f118220-9d7c-4c48-a0bc-35415c01901e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 02:16:37.749700 containerd[1550]: time="2025-12-16T02:16:37.749568634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 02:16:37.987801 containerd[1550]: time="2025-12-16T02:16:37.987742249Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:16:37.988789 containerd[1550]: time="2025-12-16T02:16:37.988715177Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 02:16:37.988979 containerd[1550]: time="2025-12-16T02:16:37.988756261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 02:16:37.989075 kubelet[2693]: E1216 02:16:37.988941 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:16:37.989219 kubelet[2693]: E1216 02:16:37.989150 2693 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 02:16:37.989406 kubelet[2693]: E1216 02:16:37.989349 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kq48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cd49d4685-v8mf6_calico-system(0f118220-9d7c-4c48-a0bc-35415c01901e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 02:16:37.990569 kubelet[2693]: E1216 02:16:37.990529 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cd49d4685-v8mf6" podUID="0f118220-9d7c-4c48-a0bc-35415c01901e" Dec 16 02:16:38.772047 systemd[1]: Started sshd@12-10.0.0.94:22-10.0.0.1:45226.service - OpenSSH per-connection server daemon (10.0.0.1:45226). Dec 16 02:16:38.773672 kernel: kauditd_printk_skb: 35 callbacks suppressed Dec 16 02:16:38.773755 kernel: audit: type=1130 audit(1765851398.770:800): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.94:22-10.0.0.1:45226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:16:38.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.94:22-10.0.0.1:45226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:16:38.828000 audit[4974]: USER_ACCT pid=4974 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:38.829996 sshd[4974]: Accepted publickey for core from 10.0.0.1 port 45226 ssh2: RSA SHA256:rH1sOPbnKIrlCFQFTuacxecfE1BEKAl/Bfev/eSdaO4 Dec 16 02:16:38.832556 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:16:38.830000 audit[4974]: CRED_ACQ pid=4974 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:38.836530 kernel: audit: type=1101 audit(1765851398.828:801): pid=4974 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:38.836673 kernel: audit: type=1103 audit(1765851398.830:802): pid=4974 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:38.836699 kernel: audit: type=1006 audit(1765851398.830:803): pid=4974 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 02:16:38.830000 audit[4974]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe51c9740 a2=3 a3=0 items=0 ppid=1 pid=4974 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:38.839355 systemd-logind[1526]: New session 14 of user core. Dec 16 02:16:38.841765 kernel: audit: type=1300 audit(1765851398.830:803): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe51c9740 a2=3 a3=0 items=0 ppid=1 pid=4974 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:38.841832 kernel: audit: type=1327 audit(1765851398.830:803): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:16:38.830000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:16:38.851780 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 02:16:38.852000 audit[4974]: USER_START pid=4974 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:38.853000 audit[4980]: CRED_ACQ pid=4980 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:38.860338 kernel: audit: type=1105 audit(1765851398.852:804): pid=4974 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:38.860407 kernel: audit: type=1103 audit(1765851398.853:805): pid=4980 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:38.961785 sshd[4980]: Connection closed by 10.0.0.1 port 45226 Dec 16 02:16:38.962141 sshd-session[4974]: pam_unix(sshd:session): session closed for user core Dec 16 02:16:38.962000 audit[4974]: USER_END pid=4974 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:38.962000 audit[4974]: CRED_DISP pid=4974 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:38.970984 kernel: audit: type=1106 audit(1765851398.962:806): pid=4974 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:38.971056 kernel: audit: type=1104 audit(1765851398.962:807): pid=4974 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:38.980012 systemd[1]: sshd@12-10.0.0.94:22-10.0.0.1:45226.service: Deactivated successfully. Dec 16 02:16:38.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.94:22-10.0.0.1:45226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:16:38.981641 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 02:16:38.982282 systemd-logind[1526]: Session 14 logged out. Waiting for processes to exit. Dec 16 02:16:38.986911 systemd[1]: Started sshd@13-10.0.0.94:22-10.0.0.1:45230.service - OpenSSH per-connection server daemon (10.0.0.1:45230). 
Dec 16 02:16:38.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.94:22-10.0.0.1:45230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:16:38.987679 systemd-logind[1526]: Removed session 14. Dec 16 02:16:39.050000 audit[4993]: USER_ACCT pid=4993 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:39.052879 sshd[4993]: Accepted publickey for core from 10.0.0.1 port 45230 ssh2: RSA SHA256:rH1sOPbnKIrlCFQFTuacxecfE1BEKAl/Bfev/eSdaO4 Dec 16 02:16:39.052000 audit[4993]: CRED_ACQ pid=4993 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:39.052000 audit[4993]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffca12abb0 a2=3 a3=0 items=0 ppid=1 pid=4993 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:39.052000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:16:39.054613 sshd-session[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:16:39.060696 systemd-logind[1526]: New session 15 of user core. Dec 16 02:16:39.069800 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 02:16:39.070000 audit[4993]: USER_START pid=4993 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:39.072000 audit[4997]: CRED_ACQ pid=4997 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:39.223461 sshd[4997]: Connection closed by 10.0.0.1 port 45230 Dec 16 02:16:39.223992 sshd-session[4993]: pam_unix(sshd:session): session closed for user core Dec 16 02:16:39.224000 audit[4993]: USER_END pid=4993 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:39.224000 audit[4993]: CRED_DISP pid=4993 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:39.235212 systemd[1]: sshd@13-10.0.0.94:22-10.0.0.1:45230.service: Deactivated successfully. Dec 16 02:16:39.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.94:22-10.0.0.1:45230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 02:16:39.237168 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 02:16:39.237948 systemd-logind[1526]: Session 15 logged out. Waiting for processes to exit. Dec 16 02:16:39.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.94:22-10.0.0.1:45234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:16:39.240920 systemd[1]: Started sshd@14-10.0.0.94:22-10.0.0.1:45234.service - OpenSSH per-connection server daemon (10.0.0.1:45234). Dec 16 02:16:39.241471 systemd-logind[1526]: Removed session 15. Dec 16 02:16:39.305000 audit[5008]: USER_ACCT pid=5008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:39.307280 sshd[5008]: Accepted publickey for core from 10.0.0.1 port 45234 ssh2: RSA SHA256:rH1sOPbnKIrlCFQFTuacxecfE1BEKAl/Bfev/eSdaO4 Dec 16 02:16:39.306000 audit[5008]: CRED_ACQ pid=5008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:39.307000 audit[5008]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc5fc4e70 a2=3 a3=0 items=0 ppid=1 pid=5008 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:39.307000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:16:39.309476 sshd-session[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:16:39.314240 systemd-logind[1526]: New session 16 of user core. Dec 16 02:16:39.319780 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 02:16:39.320000 audit[5008]: USER_START pid=5008 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:39.322000 audit[5012]: CRED_ACQ pid=5012 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:39.839000 audit[5028]: NETFILTER_CFG table=filter:150 family=2 entries=26 op=nft_register_rule pid=5028 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:39.839000 audit[5028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffde050bf0 a2=0 a3=1 items=0 ppid=2803 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:39.839000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:39.850000 audit[5028]: NETFILTER_CFG table=nat:151 family=2 entries=20 op=nft_register_rule pid=5028 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:39.850000 audit[5028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffde050bf0 a2=0 a3=1 items=0 ppid=2803 pid=5028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:39.850000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:39.853620 sshd[5012]: Connection closed by 10.0.0.1 port 45234 Dec 16 02:16:39.854198 sshd-session[5008]: pam_unix(sshd:session): session closed for user core Dec 16 02:16:39.854000 audit[5008]: USER_END pid=5008 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:39.855000 audit[5008]: CRED_DISP pid=5008 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:39.868528 systemd[1]: sshd@14-10.0.0.94:22-10.0.0.1:45234.service: Deactivated successfully. Dec 16 02:16:39.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.94:22-10.0.0.1:45234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:16:39.872906 systemd[1]: session-16.scope: Deactivated successfully. 
Dec 16 02:16:39.872000 audit[5031]: NETFILTER_CFG table=filter:152 family=2 entries=38 op=nft_register_rule pid=5031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:39.872000 audit[5031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffffd995630 a2=0 a3=1 items=0 ppid=2803 pid=5031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:39.872000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:39.875763 systemd-logind[1526]: Session 16 logged out. Waiting for processes to exit. Dec 16 02:16:39.877000 audit[5031]: NETFILTER_CFG table=nat:153 family=2 entries=20 op=nft_register_rule pid=5031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:39.877000 audit[5031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffffd995630 a2=0 a3=1 items=0 ppid=2803 pid=5031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:39.877000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:39.881972 systemd[1]: Started sshd@15-10.0.0.94:22-10.0.0.1:45248.service - OpenSSH per-connection server daemon (10.0.0.1:45248). Dec 16 02:16:39.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.94:22-10.0.0.1:45248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:16:39.882779 systemd-logind[1526]: Removed session 16. Dec 16 02:16:39.937000 audit[5035]: USER_ACCT pid=5035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:39.939491 sshd[5035]: Accepted publickey for core from 10.0.0.1 port 45248 ssh2: RSA SHA256:rH1sOPbnKIrlCFQFTuacxecfE1BEKAl/Bfev/eSdaO4 Dec 16 02:16:39.938000 audit[5035]: CRED_ACQ pid=5035 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:39.938000 audit[5035]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd776110 a2=3 a3=0 items=0 ppid=1 pid=5035 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:39.938000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:16:39.941215 sshd-session[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:16:39.946166 systemd-logind[1526]: New session 17 of user core. Dec 16 02:16:39.956797 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 02:16:39.957000 audit[5035]: USER_START pid=5035 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:39.958000 audit[5039]: CRED_ACQ pid=5039 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:40.222441 sshd[5039]: Connection closed by 10.0.0.1 port 45248 Dec 16 02:16:40.222801 sshd-session[5035]: pam_unix(sshd:session): session closed for user core Dec 16 02:16:40.225000 audit[5035]: USER_END pid=5035 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:40.225000 audit[5035]: CRED_DISP pid=5035 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:40.232083 systemd[1]: sshd@15-10.0.0.94:22-10.0.0.1:45248.service: Deactivated successfully. Dec 16 02:16:40.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.94:22-10.0.0.1:45248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:16:40.235557 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 02:16:40.236756 systemd-logind[1526]: Session 17 logged out. Waiting for processes to exit. Dec 16 02:16:40.242925 systemd[1]: Started sshd@16-10.0.0.94:22-10.0.0.1:45264.service - OpenSSH per-connection server daemon (10.0.0.1:45264). Dec 16 02:16:40.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.94:22-10.0.0.1:45264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:16:40.245892 systemd-logind[1526]: Removed session 17. 
Dec 16 02:16:40.300000 audit[5050]: USER_ACCT pid=5050 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:40.302406 sshd[5050]: Accepted publickey for core from 10.0.0.1 port 45264 ssh2: RSA SHA256:rH1sOPbnKIrlCFQFTuacxecfE1BEKAl/Bfev/eSdaO4 Dec 16 02:16:40.302000 audit[5050]: CRED_ACQ pid=5050 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:40.302000 audit[5050]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd1ba9b00 a2=3 a3=0 items=0 ppid=1 pid=5050 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:40.302000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:16:40.304763 sshd-session[5050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:16:40.310256 systemd-logind[1526]: New session 18 of user core. Dec 16 02:16:40.317800 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 02:16:40.318000 audit[5050]: USER_START pid=5050 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:40.320000 audit[5054]: CRED_ACQ pid=5054 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:40.414679 sshd[5054]: Connection closed by 10.0.0.1 port 45264 Dec 16 02:16:40.414957 sshd-session[5050]: pam_unix(sshd:session): session closed for user core Dec 16 02:16:40.414000 audit[5050]: USER_END pid=5050 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:40.414000 audit[5050]: CRED_DISP pid=5050 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:40.418827 systemd[1]: sshd@16-10.0.0.94:22-10.0.0.1:45264.service: Deactivated successfully. Dec 16 02:16:40.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.94:22-10.0.0.1:45264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:16:40.420804 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 02:16:40.421534 systemd-logind[1526]: Session 18 logged out. Waiting for processes to exit. Dec 16 02:16:40.422392 systemd-logind[1526]: Removed session 18. 
Dec 16 02:16:42.522080 containerd[1550]: time="2025-12-16T02:16:42.521564906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 02:16:42.730497 containerd[1550]: time="2025-12-16T02:16:42.730434907Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:16:42.731416 containerd[1550]: time="2025-12-16T02:16:42.731381468Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 02:16:42.731505 containerd[1550]: time="2025-12-16T02:16:42.731467995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 02:16:42.731698 kubelet[2693]: E1216 02:16:42.731655 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:16:42.732005 kubelet[2693]: E1216 02:16:42.731702 2693 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 02:16:42.732005 kubelet[2693]: E1216 02:16:42.731955 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pscw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-754cc876f4-x89dv_calico-system(7aeba5eb-89d7-4d56-af0a-38d8908b6a09): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 02:16:42.732905 containerd[1550]: time="2025-12-16T02:16:42.732383553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:16:42.733944 kubelet[2693]: E1216 02:16:42.733778 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754cc876f4-x89dv" podUID="7aeba5eb-89d7-4d56-af0a-38d8908b6a09" Dec 16 02:16:42.934408 containerd[1550]: time="2025-12-16T02:16:42.934269876Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:16:42.946860 containerd[1550]: time="2025-12-16T02:16:42.946411756Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:16:42.946860 containerd[1550]: time="2025-12-16T02:16:42.946489642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:16:42.947017 kubelet[2693]: E1216 02:16:42.946675 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:16:42.947017 kubelet[2693]: E1216 02:16:42.946720 2693 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:16:42.947017 kubelet[2693]: E1216 02:16:42.946842 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jw7zj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79dfb47d67-g8xrk_calico-apiserver(1740e8a2-cfc1-49fa-aafb-3ebfadc1402f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:16:42.949133 kubelet[2693]: E1216 02:16:42.948213 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79dfb47d67-g8xrk" podUID="1740e8a2-cfc1-49fa-aafb-3ebfadc1402f" Dec 16 02:16:44.521871 containerd[1550]: time="2025-12-16T02:16:44.521820656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 02:16:44.733818 containerd[1550]: time="2025-12-16T02:16:44.733745989Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 02:16:44.734791 containerd[1550]: time="2025-12-16T02:16:44.734740232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 02:16:44.734877 containerd[1550]: time="2025-12-16T02:16:44.734809318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 02:16:44.734980 
kubelet[2693]: E1216 02:16:44.734933 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:16:44.735441 kubelet[2693]: E1216 02:16:44.734988 2693 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 02:16:44.738225 kubelet[2693]: E1216 02:16:44.738167 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d89cp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79dfb47d67-2sm7d_calico-apiserver(940a1e00-e3f0-45f9-b45b-33acce551ddd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 02:16:44.739701 kubelet[2693]: E1216 02:16:44.739670 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-79dfb47d67-2sm7d" podUID="940a1e00-e3f0-45f9-b45b-33acce551ddd" Dec 16 02:16:45.229000 audit[5077]: NETFILTER_CFG table=filter:154 family=2 entries=26 op=nft_register_rule pid=5077 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:45.233374 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 02:16:45.233479 kernel: audit: type=1325 audit(1765851405.229:849): table=filter:154 family=2 entries=26 op=nft_register_rule pid=5077 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:45.229000 audit[5077]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdd99c530 a2=0 a3=1 items=0 ppid=2803 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:45.229000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:45.239203 kernel: audit: type=1300 audit(1765851405.229:849): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdd99c530 a2=0 a3=1 items=0 ppid=2803 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:45.239271 kernel: audit: type=1327 audit(1765851405.229:849): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:45.235000 audit[5077]: NETFILTER_CFG table=nat:155 family=2 entries=104 op=nft_register_chain pid=5077 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:45.241249 kernel: audit: type=1325 audit(1765851405.235:850): table=nat:155 family=2 entries=104 op=nft_register_chain pid=5077 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 02:16:45.241290 kernel: audit: type=1300 audit(1765851405.235:850): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffdd99c530 a2=0 a3=1 items=0 ppid=2803 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:45.235000 audit[5077]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffdd99c530 a2=0 a3=1 items=0 ppid=2803 pid=5077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:45.235000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:45.246767 kernel: audit: type=1327 audit(1765851405.235:850): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 02:16:45.427641 systemd[1]: Started sshd@17-10.0.0.94:22-10.0.0.1:37760.service - OpenSSH per-connection server daemon (10.0.0.1:37760). Dec 16 02:16:45.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.94:22-10.0.0.1:37760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 02:16:45.431634 kernel: audit: type=1130 audit(1765851405.427:851): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.94:22-10.0.0.1:37760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:16:45.494000 audit[5079]: USER_ACCT pid=5079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:45.496060 sshd[5079]: Accepted publickey for core from 10.0.0.1 port 37760 ssh2: RSA SHA256:rH1sOPbnKIrlCFQFTuacxecfE1BEKAl/Bfev/eSdaO4 Dec 16 02:16:45.498092 sshd-session[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 02:16:45.496000 audit[5079]: CRED_ACQ pid=5079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:45.502053 kernel: audit: type=1101 audit(1765851405.494:852): pid=5079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:45.502119 kernel: audit: type=1103 audit(1765851405.496:853): pid=5079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:45.502986 kernel: audit: type=1006 audit(1765851405.496:854): pid=5079 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 16 02:16:45.496000 audit[5079]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff86cb7e0 a2=3 a3=0 items=0 ppid=1 pid=5079 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 02:16:45.496000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 02:16:45.507048 systemd-logind[1526]: New session 19 of user core. Dec 16 02:16:45.517818 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 02:16:45.519000 audit[5079]: USER_START pid=5079 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:45.522939 containerd[1550]: time="2025-12-16T02:16:45.522888237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 02:16:45.522000 audit[5083]: CRED_ACQ pid=5083 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:45.637737 sshd[5083]: Connection closed by 10.0.0.1 port 37760 Dec 16 02:16:45.638180 sshd-session[5079]: pam_unix(sshd:session): session closed for user core Dec 16 02:16:45.638000 audit[5079]: USER_END pid=5079 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:45.638000 audit[5079]: CRED_DISP pid=5079 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 02:16:45.642328 systemd[1]: sshd@17-10.0.0.94:22-10.0.0.1:37760.service: Deactivated successfully. Dec 16 02:16:45.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.94:22-10.0.0.1:37760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 02:16:45.646033 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 02:16:45.647454 systemd-logind[1526]: Session 19 logged out. Waiting for processes to exit. Dec 16 02:16:45.648526 systemd-logind[1526]: Removed session 19. 
Dec 16 02:16:45.740234 containerd[1550]: time="2025-12-16T02:16:45.740179489Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 02:16:45.741240 containerd[1550]: time="2025-12-16T02:16:45.741190653Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 16 02:16:45.741358 containerd[1550]: time="2025-12-16T02:16:45.741268099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Dec 16 02:16:45.741467 kubelet[2693]: E1216 02:16:45.741429 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 02:16:45.741780 kubelet[2693]: E1216 02:16:45.741478 2693 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 02:16:45.741780 kubelet[2693]: E1216 02:16:45.741648 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n67sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xd2kv_calico-system(fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 16 02:16:45.744891 containerd[1550]: time="2025-12-16T02:16:45.744855117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 16 02:16:45.942853 containerd[1550]: time="2025-12-16T02:16:45.942679751Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 02:16:45.944079 containerd[1550]: time="2025-12-16T02:16:45.944039584Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 16 02:16:45.944153 containerd[1550]: time="2025-12-16T02:16:45.944077427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Dec 16 02:16:45.944288 kubelet[2693]: E1216 02:16:45.944250 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 02:16:45.944367 kubelet[2693]: E1216 02:16:45.944299 2693 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 02:16:45.944443 kubelet[2693]: E1216 02:16:45.944410 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n67sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xd2kv_calico-system(fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 16 02:16:45.945700 kubelet[2693]: E1216 02:16:45.945650 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xd2kv" podUID="fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff"
Dec 16 02:16:48.524033 containerd[1550]: time="2025-12-16T02:16:48.523986844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 16 02:16:48.738623 containerd[1550]: time="2025-12-16T02:16:48.738510818Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 02:16:48.739798 containerd[1550]: time="2025-12-16T02:16:48.739743758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Dec 16 02:16:48.740033 containerd[1550]: time="2025-12-16T02:16:48.739902051Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 16 02:16:48.740239 kubelet[2693]: E1216 02:16:48.740183 2693 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 02:16:48.740532 kubelet[2693]: E1216 02:16:48.740246 2693 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 02:16:48.740532 kubelet[2693]: E1216 02:16:48.740394 2693 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2z62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lddqc_calico-system(c7d46623-447d-4a3b-a433-802c6ce8e063): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 16 02:16:48.741828 kubelet[2693]: E1216 02:16:48.741651 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lddqc" podUID="c7d46623-447d-4a3b-a433-802c6ce8e063"
Dec 16 02:16:50.523992 kubelet[2693]: E1216 02:16:50.523913 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cd49d4685-v8mf6" podUID="0f118220-9d7c-4c48-a0bc-35415c01901e"
Dec 16 02:16:50.653654 systemd[1]: Started sshd@18-10.0.0.94:22-10.0.0.1:37770.service - OpenSSH per-connection server daemon (10.0.0.1:37770).
Dec 16 02:16:50.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.94:22-10.0.0.1:37770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:16:50.657400 kernel: kauditd_printk_skb: 7 callbacks suppressed
Dec 16 02:16:50.657510 kernel: audit: type=1130 audit(1765851410.653:860): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.94:22-10.0.0.1:37770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:16:50.750000 audit[5097]: USER_ACCT pid=5097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:50.751252 sshd[5097]: Accepted publickey for core from 10.0.0.1 port 37770 ssh2: RSA SHA256:rH1sOPbnKIrlCFQFTuacxecfE1BEKAl/Bfev/eSdaO4
Dec 16 02:16:50.754000 audit[5097]: CRED_ACQ pid=5097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:50.756386 sshd-session[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 02:16:50.758063 kernel: audit: type=1101 audit(1765851410.750:861): pid=5097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:50.758237 kernel: audit: type=1103 audit(1765851410.754:862): pid=5097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:50.761076 kernel: audit: type=1006 audit(1765851410.754:863): pid=5097 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1
Dec 16 02:16:50.754000 audit[5097]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffefff820 a2=3 a3=0 items=0 ppid=1 pid=5097 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:16:50.766975 kernel: audit: type=1300 audit(1765851410.754:863): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffefff820 a2=3 a3=0 items=0 ppid=1 pid=5097 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:16:50.754000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 02:16:50.768816 kernel: audit: type=1327 audit(1765851410.754:863): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 02:16:50.770149 systemd-logind[1526]: New session 20 of user core.
Dec 16 02:16:50.773814 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 16 02:16:50.777000 audit[5097]: USER_START pid=5097 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:50.782000 audit[5101]: CRED_ACQ pid=5101 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:50.786737 kernel: audit: type=1105 audit(1765851410.777:864): pid=5097 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:50.786826 kernel: audit: type=1103 audit(1765851410.782:865): pid=5101 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:50.943767 sshd[5101]: Connection closed by 10.0.0.1 port 37770
Dec 16 02:16:50.944580 sshd-session[5097]: pam_unix(sshd:session): session closed for user core
Dec 16 02:16:50.945000 audit[5097]: USER_END pid=5097 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:50.949273 systemd-logind[1526]: Session 20 logged out. Waiting for processes to exit.
Dec 16 02:16:50.949518 systemd[1]: sshd@18-10.0.0.94:22-10.0.0.1:37770.service: Deactivated successfully.
Dec 16 02:16:50.945000 audit[5097]: CRED_DISP pid=5097 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:50.953291 kernel: audit: type=1106 audit(1765851410.945:866): pid=5097 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:50.953347 kernel: audit: type=1104 audit(1765851410.945:867): pid=5097 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:50.953374 systemd[1]: session-20.scope: Deactivated successfully.
Dec 16 02:16:50.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.94:22-10.0.0.1:37770 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:16:50.955812 systemd-logind[1526]: Removed session 20.
Dec 16 02:16:55.522802 kubelet[2693]: E1216 02:16:55.522685 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79dfb47d67-g8xrk" podUID="1740e8a2-cfc1-49fa-aafb-3ebfadc1402f"
Dec 16 02:16:55.959872 systemd[1]: Started sshd@19-10.0.0.94:22-10.0.0.1:58134.service - OpenSSH per-connection server daemon (10.0.0.1:58134).
Dec 16 02:16:55.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.94:22-10.0.0.1:58134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:16:55.960919 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 02:16:55.960976 kernel: audit: type=1130 audit(1765851415.959:869): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.94:22-10.0.0.1:58134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:16:56.014000 audit[5119]: USER_ACCT pid=5119 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:56.015010 sshd[5119]: Accepted publickey for core from 10.0.0.1 port 58134 ssh2: RSA SHA256:rH1sOPbnKIrlCFQFTuacxecfE1BEKAl/Bfev/eSdaO4
Dec 16 02:16:56.018029 sshd-session[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 02:16:56.016000 audit[5119]: CRED_ACQ pid=5119 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:56.021190 kernel: audit: type=1101 audit(1765851416.014:870): pid=5119 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:56.021268 kernel: audit: type=1103 audit(1765851416.016:871): pid=5119 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:56.022247 kernel: audit: type=1006 audit(1765851416.016:872): pid=5119 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1
Dec 16 02:16:56.022710 systemd-logind[1526]: New session 21 of user core.
Dec 16 02:16:56.016000 audit[5119]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffb7e4360 a2=3 a3=0 items=0 ppid=1 pid=5119 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:16:56.026461 kernel: audit: type=1300 audit(1765851416.016:872): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffb7e4360 a2=3 a3=0 items=0 ppid=1 pid=5119 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:16:56.026518 kernel: audit: type=1327 audit(1765851416.016:872): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 02:16:56.016000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 02:16:56.035694 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 16 02:16:56.037000 audit[5119]: USER_START pid=5119 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:56.041614 kernel: audit: type=1105 audit(1765851416.037:873): pid=5119 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:56.041000 audit[5123]: CRED_ACQ pid=5123 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:56.045630 kernel: audit: type=1103 audit(1765851416.041:874): pid=5123 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:56.140640 sshd[5123]: Connection closed by 10.0.0.1 port 58134
Dec 16 02:16:56.141165 sshd-session[5119]: pam_unix(sshd:session): session closed for user core
Dec 16 02:16:56.141000 audit[5119]: USER_END pid=5119 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:56.145358 systemd[1]: sshd@19-10.0.0.94:22-10.0.0.1:58134.service: Deactivated successfully.
Dec 16 02:16:56.142000 audit[5119]: CRED_DISP pid=5119 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:56.148172 systemd[1]: session-21.scope: Deactivated successfully.
Dec 16 02:16:56.149317 systemd-logind[1526]: Session 21 logged out. Waiting for processes to exit.
Dec 16 02:16:56.149414 kernel: audit: type=1106 audit(1765851416.141:875): pid=5119 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:56.149470 kernel: audit: type=1104 audit(1765851416.142:876): pid=5119 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:16:56.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.94:22-10.0.0.1:58134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:16:56.150676 systemd-logind[1526]: Removed session 21.
Dec 16 02:16:56.521756 kubelet[2693]: E1216 02:16:56.521711 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79dfb47d67-2sm7d" podUID="940a1e00-e3f0-45f9-b45b-33acce551ddd"
Dec 16 02:16:57.522075 kubelet[2693]: E1216 02:16:57.521812 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754cc876f4-x89dv" podUID="7aeba5eb-89d7-4d56-af0a-38d8908b6a09"
Dec 16 02:16:57.522428 kubelet[2693]: E1216 02:16:57.522321 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xd2kv" podUID="fdb43fea-30e5-4f4c-8d7c-cf0f6c47a9ff"
Dec 16 02:17:01.166058 systemd[1]: Started sshd@20-10.0.0.94:22-10.0.0.1:50712.service - OpenSSH per-connection server daemon (10.0.0.1:50712).
Dec 16 02:17:01.169918 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 02:17:01.169965 kernel: audit: type=1130 audit(1765851421.164:878): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.94:22-10.0.0.1:50712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:17:01.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.94:22-10.0.0.1:50712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:17:01.237000 audit[5138]: USER_ACCT pid=5138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:17:01.238906 sshd[5138]: Accepted publickey for core from 10.0.0.1 port 50712 ssh2: RSA SHA256:rH1sOPbnKIrlCFQFTuacxecfE1BEKAl/Bfev/eSdaO4
Dec 16 02:17:01.241360 sshd-session[5138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 02:17:01.239000 audit[5138]: CRED_ACQ pid=5138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:17:01.245686 kernel: audit: type=1101 audit(1765851421.237:879): pid=5138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:17:01.245738 kernel: audit: type=1103 audit(1765851421.239:880): pid=5138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:17:01.248003 kernel: audit: type=1006 audit(1765851421.239:881): pid=5138 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1
Dec 16 02:17:01.248068 kernel: audit: type=1300 audit(1765851421.239:881): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd43f73d0 a2=3 a3=0 items=0 ppid=1 pid=5138 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:17:01.239000 audit[5138]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd43f73d0 a2=3 a3=0 items=0 ppid=1 pid=5138 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 02:17:01.250246 systemd-logind[1526]: New session 22 of user core.
Dec 16 02:17:01.239000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 02:17:01.252691 kernel: audit: type=1327 audit(1765851421.239:881): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 02:17:01.255757 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 16 02:17:01.256000 audit[5138]: USER_START pid=5138 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:17:01.261000 audit[5142]: CRED_ACQ pid=5142 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:17:01.266143 kernel: audit: type=1105 audit(1765851421.256:882): pid=5138 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:17:01.266258 kernel: audit: type=1103 audit(1765851421.261:883): pid=5142 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:17:01.359520 sshd[5142]: Connection closed by 10.0.0.1 port 50712
Dec 16 02:17:01.360206 sshd-session[5138]: pam_unix(sshd:session): session closed for user core
Dec 16 02:17:01.360000 audit[5138]: USER_END pid=5138 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:17:01.365140 systemd[1]: sshd@20-10.0.0.94:22-10.0.0.1:50712.service: Deactivated successfully.
Dec 16 02:17:01.360000 audit[5138]: CRED_DISP pid=5138 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:17:01.366948 systemd[1]: session-22.scope: Deactivated successfully.
Dec 16 02:17:01.368390 systemd-logind[1526]: Session 22 logged out. Waiting for processes to exit.
Dec 16 02:17:01.368510 kernel: audit: type=1106 audit(1765851421.360:884): pid=5138 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:17:01.368548 kernel: audit: type=1104 audit(1765851421.360:885): pid=5138 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 02:17:01.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.94:22-10.0.0.1:50712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 02:17:01.369195 systemd-logind[1526]: Removed session 22.
Dec 16 02:17:02.521581 kubelet[2693]: E1216 02:17:02.521506 2693 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lddqc" podUID="c7d46623-447d-4a3b-a433-802c6ce8e063"